Accepted: EANM2022
Aim/Introduction: Coronary Artery Disease (CAD) has the highest mortality rate among heart diseases. Its severity has driven research into computer-aided systems that provide near-optimal accuracy in image classification and anomaly detection. Our research addresses three-class image classification in CAD, distinguishing between the normal state, ischemia, and infarction, using two methods: (a) an RGB-CNN (Convolutional Neural Network) and (b) transfer learning. Both techniques have proved suitable for nuclear medical image classification. Materials and Methods: This paper proposes an RGB-CNN model demonstrating robustness, efficiency, and generalizability, which automatically classifies SPECT-MPI images by generating patterns and extracting high-level features in the absence of clinical data. Transfer learning was then applied to compare model outcomes, with classification knowledge obtained from pre-trained networks. To test the RGB-CNN, a total of 647 cases were initially classified by a nuclear medicine expert into 262 normal, 251 ischemic,
and 134 infarcted cases. Data augmentation was used to increase the size of the training dataset. Patients were examined under stress and rest conditions for the detection of possible CAD symptoms. The dataset was shuffled and split into 85% for training and 15% for testing, with 15% of the training set held out for validation. The experiments were conducted on Google's Colab platform, which provides GPU support for deep learning (DL) processing. An in-depth analysis was performed over a range of image sizes, batch sizes, numbers of nodes (including those of the dense layers), and numbers of convolutional layers. Furthermore, 10-fold cross-validation was performed to assess
the CNN's results. Results: A thorough CNN exploration identified the best DL model with the following specifications: 250x250x3 image size, batch size of 16, four convolutional layers with 16-32-64-128 nodes, two dense layers of 128 nodes each, and a dropout rate of 0.2. All experiments were run for 400 epochs. Under 10-fold cross-validation, the best CNN achieved 87.45% (±2.50) accuracy and a loss of 0.34. Furthermore, a comparative analysis with the pre-trained VGG-16 and DenseNet-121 networks was performed using the same cross-validation method. VGG-16 achieved 87.49% (±3.12) accuracy and 0.39 loss, while DenseNet-121 achieved 82.88% (±2.12) accuracy and 0.42 loss. Conclusion: The proposed model demonstrated strong potential for
future studies in nuclear cardiac image classification, achieving accuracy comparable to, and loss lower than, the pre-trained networks, which had been trained on a far larger number of images.
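As an illustration of the data partitioning described in Materials and Methods, the sketch below computes the split sizes for the 647-case dataset (15% test, then 15% of the remaining training cases for validation). The shuffling, rounding, and seed are assumptions for illustration, not details taken from the study:

```python
import random

def split_dataset(cases, test_frac=0.15, val_frac=0.15, seed=42):
    """Shuffle, hold out test_frac for testing, then val_frac of the
    remaining training cases for validation (fractions per the abstract;
    the exact shuffling and rounding details are assumptions)."""
    rng = random.Random(seed)
    cases = list(cases)
    rng.shuffle(cases)
    n_test = round(len(cases) * test_frac)
    test, train = cases[:n_test], cases[n_test:]
    n_val = round(len(train) * val_frac)
    val, train = train[:n_val], train[n_val:]
    return train, val, test

# For 647 cases this yields 97 test cases, with the rest divided
# roughly 85/15 between training and validation.
train, val, test = split_dataset(range(647))
```

In practice the class labels (normal/ischemic/infarcted) would typically be stratified across the splits, but the abstract does not state whether stratification was used.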
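To make the reported best architecture concrete, the following sketch traces the spatial feature-map sizes through the four convolutional blocks from the 250x250x3 input, assuming each 16-32-64-128-node convolutional layer is followed by 2x2 max pooling (the pooling arrangement is an assumption; the abstract specifies only the layer counts, node numbers, and input size):

```python
def trace_shapes(input_size=250, conv_filters=(16, 32, 64, 128)):
    """Spatial size after each assumed conv + 2x2 max-pool block."""
    sizes = [input_size]
    for _ in conv_filters:
        sizes.append(sizes[-1] // 2)  # 2x2 pooling halves each spatial dimension
    return sizes

sizes = trace_shapes()            # [250, 125, 62, 31, 15]
flattened = sizes[-1] ** 2 * 128  # units feeding the assumed 128-128 dense head
```

Under these assumptions, the flattened feature map entering the two 128-node dense layers (followed by the three-class output and 0.2 dropout) would hold 15 x 15 x 128 = 28,800 units.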