Pre-Training Transformers for Fingerprinting to Improve Stress Prediction in fMRI

Gony Rosenman, Itzik Malkiel, Ayam Greental, Talma Hendler, Lior Wolf

Research output: Contribution to journal › Conference article › peer-review


We harness a Transformer-based model and a pre-training procedure for fingerprinting on fMRI data, to enhance the accuracy of stress predictions. Our model, called MetricFMRI, first optimizes a pixel-based reconstruction loss. In a second unsupervised training phase, a triplet loss is used to encourage fMRI sequences of the same subject to have closer representations, while sequences from different subjects are pushed away from each other. Finally, supervised learning is used for the target task, based on the learned representation. We evaluate the performance of our model and other alternatives and conclude that the triplet training for the fingerprinting task is key to the improved accuracy of our method for the task of stress prediction. To obtain insights regarding the learned model, gradient-based explainability techniques are used, indicating that sub-cortical brain regions that are known to play a central role in stress-related processes are highlighted by the model.
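The triplet objective described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the margin value, the Euclidean distance metric, and the function name are assumptions, and MetricFMRI's actual mining strategy and embedding dimensionality are not specified here.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet margin loss: pulls the anchor toward a sequence from the
    same subject (positive) and pushes it away from a sequence from a
    different subject (negative).

    anchor, positive, negative: 1-D embedding vectors.
    margin: hypothetical margin hyperparameter (not from the paper).
    """
    d_pos = np.linalg.norm(anchor - positive)  # same-subject distance
    d_neg = np.linalg.norm(anchor - negative)  # different-subject distance
    return max(0.0, d_pos - d_neg + margin)
```

When the same-subject pair is already closer than the different-subject pair by at least the margin, the loss is zero; otherwise the gradient drives same-subject representations together and different-subject representations apart.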

Original language: English
Pages (from-to): 212-234
Number of pages: 23
Journal: Proceedings of Machine Learning Research
State: Published - 2023
Event: 6th International Conference on Medical Imaging with Deep Learning, MIDL 2023 - Nashville, United States
Duration: 10 Jul 2023 - 12 Jul 2023


Keywords:
  • fMRI
  • Metric-Learning
  • Transformers

