Academic Journal

Comparison between vision transformers and convolutional neural networks to predict non-small lung cancer recurrence.

Bibliographic Details
Title: Comparison between vision transformers and convolutional neural networks to predict non-small lung cancer recurrence.
Authors: Fanizzi, Annarita, Fadda, Federico, Comes, Maria Colomba, Bove, Samantha, Catino, Annamaria, Di Benedetto, Erika, Milella, Angelo, Montrone, Michele, Nardone, Annalisa, Soranno, Clara, Rizzo, Alessandro, Guven, Deniz Can, Galetta, Domenico, Massafra, Raffaella
Source: Scientific Reports; 11/23/2023, Vol. 13 Issue 1, p1-10, 10p
Subject Terms: TRANSFORMER models, DEEP learning, CONVOLUTIONAL neural networks, CANCER relapse, MACHINE learning, LUNG cancer
Abstract: Non-small cell lung cancer (NSCLC) is one of the most dangerous cancers, accounting for 85% of all new lung cancer diagnoses and showing a 30–55% recurrence rate after surgery. Thus, an accurate prediction of recurrence risk in NSCLC patients at diagnosis could be essential to drive targeted therapies, preventing either overtreatment or undertreatment of cancer patients. The radiomic analysis of CT images has already shown great potential in solving this task; specifically, Convolutional Neural Networks (CNNs) have already been proposed, providing good performances. Recently, Vision Transformers (ViTs) have been introduced, reaching comparable and even better performances than traditional CNNs in image classification. The aim of this paper was to compare the performances of different state-of-the-art deep learning algorithms to predict cancer recurrence in NSCLC patients. In this work, using a public database of 144 patients, we implemented a transfer learning approach, involving different Transformer architectures such as pre-trained ViTs, pre-trained Pyramid Vision Transformers, and pre-trained Swin Transformers, to predict the recurrence of NSCLC patients from CT images, comparing their performances with state-of-the-art CNNs. Although the best performances in this study are reached via CNNs, with AUC, Accuracy, Sensitivity, Specificity, and Precision equal to 0.91, 0.89, 0.85, 0.90, and 0.78, respectively, Transformer architectures reach comparable ones, with AUC, Accuracy, Sensitivity, Specificity, and Precision equal to 0.90, 0.86, 0.81, 0.89, and 0.75, respectively. Based on our preliminary experimental results, it appears that Transformer architectures do not add improvements in terms of predictive performance to the addressed problem. [ABSTRACT FROM AUTHOR]
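The abstract compares the models on AUC, Accuracy, Sensitivity, Specificity, and Precision. As a minimal sketch of how those five metrics are conventionally computed for a binary recurrence classifier (using hypothetical predictions, not the paper's data or code), the standard definitions from the confusion matrix and the rank-based AUC are:

```python
# Hedged sketch: standard binary-classification metrics as named in the
# abstract, computed from hypothetical labels/scores (illustrative only).

def confusion_counts(y_true, y_pred):
    """Count true/false positives and negatives for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def classification_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall on recurrence), specificity, precision."""
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision": tp / (tp + fp),
    }

def auc(y_true, y_score):
    """Rank-based (Mann-Whitney) AUC: probability that a random positive
    case receives a higher score than a random negative case."""
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For example, a classifier that perfectly ranks every recurrence case above every non-recurrence case would score an AUC of 1.0, while the paper reports 0.91 for the best CNN and 0.90 for the best Transformer.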
Copyright of Scientific Reports is the property of Springer Nature and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
Database: Complementary Index
Description
ISSN: 2045-2322
DOI:10.1038/s41598-023-48004-9