Academic Journal

Comparison between vision transformers and convolutional neural networks to predict non-small lung cancer recurrence

Bibliographic Details
Title: Comparison between vision transformers and convolutional neural networks to predict non-small lung cancer recurrence
Authors: Annarita Fanizzi, Federico Fadda, Maria Colomba Comes, Samantha Bove, Annamaria Catino, Erika Di Benedetto, Angelo Milella, Michele Montrone, Annalisa Nardone, Clara Soranno, Alessandro Rizzo, Deniz Can Guven, Domenico Galetta, Raffaella Massafra
Source: Scientific Reports, Vol 13, Iss 1, Pp 1-10 (2023)
Publication Information: Nature Portfolio, 2023.
Publication Year: 2023
Collection: LCC:Medicine
LCC:Science
Subject Terms: Medicine, Science
Description: Abstract Non-small cell lung cancer (NSCLC) is one of the most dangerous cancers, accounting for 85% of all new lung cancer diagnoses and showing a 30–55% recurrence rate after surgery. Thus, an accurate prediction of recurrence risk in NSCLC patients at diagnosis could be essential to drive targeted therapies, preventing either overtreatment or undertreatment of cancer patients. The radiomic analysis of CT images has already shown great potential in solving this task; specifically, Convolutional Neural Networks (CNNs) have already been proposed, providing good performance. Recently, Vision Transformers (ViTs) have been introduced, reaching comparable and even better performance than traditional CNNs in image classification. The aim of this paper was to compare the performance of different state-of-the-art deep learning algorithms in predicting cancer recurrence in NSCLC patients. In this work, using a public database of 144 patients, we implemented a transfer learning approach involving different Transformer architectures, namely pre-trained ViTs, pre-trained Pyramid Vision Transformers, and pre-trained Swin Transformers, to predict the recurrence of NSCLC from CT images, comparing their performance with that of state-of-the-art CNNs. Although the best performance in this study was reached by CNNs, with AUC, Accuracy, Sensitivity, Specificity, and Precision equal to 0.91, 0.89, 0.85, 0.90, and 0.78, respectively, Transformer architectures reached comparable results, with AUC, Accuracy, Sensitivity, Specificity, and Precision equal to 0.90, 0.86, 0.81, 0.89, and 0.75, respectively. Based on our preliminary experimental results, Transformer architectures do not appear to add improvements in predictive performance for the addressed problem.
Document Type: article
File Description: electronic resource
Language: English
ISSN: 2045-2322
Relation: https://doaj.org/toc/2045-2322
DOI: 10.1038/s41598-023-48004-9
Open Access: https://doaj.org/article/355cb18e617c4a91bae039c14b60162d
Accession Number: edsdoj.355cb18e617c4a91bae039c14b60162d
Database: Directory of Open Access Journals