Task-Driven Dictionary Learning Based on Mutual Information for Medical Image Classification

Idit Diamant, Eyal Klang, Michal Amitai, Eli Konen, Jacob Goldberger, Hayit Greenspan*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

32 Scopus citations

Abstract

Objective: We present a novel variant of the bag-of-visual-words (BoVW) method for automated medical image classification. Methods: Our approach improves the BoVW model by learning a task-driven dictionary of the most relevant visual words per task using a mutual information-based criterion. Additionally, we generate relevance maps to visualize and localize the decision of the automatic classification algorithm. These maps demonstrate how the algorithm works and show the spatial layout of the most relevant words. Results: We applied our algorithm to three different tasks: chest x-ray pathology identification (of four pathologies: cardiomegaly, enlarged mediastinum, right consolidation, and left consolidation); liver lesion classification into four categories in computed tomography (CT) images; and benign/malignant classification of clusters of microcalcifications (MCs) in breast mammograms. Validation was conducted on three datasets: 443 chest x-rays, 118 portal phase CT images of liver lesions, and 260 mammography MCs. The proposed method improves the classical BoVW method for all tested applications. For chest x-ray, an area under the curve (AUC) of 0.876 was obtained for enlarged mediastinum identification, compared to 0.855 using classical BoVW (p-value < 0.01). For MC classification, a significant improvement of 4% was achieved using our new approach (p-value = 0.03). For liver lesion classification, improvements of 6% in sensitivity and 2% in specificity were obtained (p-value < 0.001). Conclusion: We demonstrated that classification based on an informative, selected set of words results in significant improvement. Significance: Our new BoVW approach shows promising results in clinically important domains. Additionally, it can discover relevant parts of images for the task at hand without explicit annotations in the training data. This can provide computer-aided support for medical experts in challenging image analysis tasks.
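The core idea of the paper's criterion — scoring each visual word by the mutual information between its occurrence in an image and the image's class label, then keeping only the highest-scoring words — can be sketched as follows. This is a minimal illustration, not the authors' exact formulation: the function name, the binarization of word counts, and the toy data are my assumptions.

```python
import numpy as np

def mi_word_selection(histograms, labels, k):
    """Rank visual words by the mutual information (in bits) between
    word occurrence (present/absent in an image) and the class label,
    and return the indices of the top-k words."""
    # Binarize BoVW histograms: does word j occur in image i at all?
    X = (histograms > 0).astype(int)
    classes = np.unique(labels)
    mi = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for x in (0, 1):                      # word absent / present
            px = np.mean(X[:, j] == x)
            if px == 0:
                continue
            for c in classes:
                pc = np.mean(labels == c)
                pxc = np.mean((X[:, j] == x) & (labels == c))
                if pxc > 0:
                    mi[j] += pxc * np.log2(pxc / (px * pc))
    # Highest-MI words first; keep the k most task-relevant ones
    return np.argsort(mi)[::-1][:k]

# Toy example: 20 images, 5 visual words; word 2 occurs exactly in
# the images of class 1, so it carries maximal information.
labels = np.array([0] * 10 + [1] * 10)
hist = np.ones((20, 5), dtype=int)   # words 0,1,3,4 occur everywhere
hist[:, 2] = labels * 5              # word 2 occurs only in class 1
selected = mi_word_selection(hist, labels, 2)
print(selected[0])  # → 2 (the perfectly discriminative word)
```

In the paper's pipeline, the reduced dictionary of selected words would then be used both to build the classification histograms and to produce the relevance maps that localize where those words occur in an image.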

Original language: English
Article number: 7558227
Pages (from-to): 1380-1392
Number of pages: 13
Journal: IEEE Transactions on Biomedical Engineering
Volume: 64
Issue number: 6
State: Published - Jun 2017

Keywords

  • Automated diagnosis
  • classification
  • dictionary
  • liver lesions
  • microcalcifications (MCs)
  • mutual information (MI)
  • relevance maps
  • visual words

