Artificial intelligence in echocardiography - Steps to automatic cardiac measurements in routine practice

Bibliographic Details
Title: Artificial intelligence in echocardiography - Steps to automatic cardiac measurements in routine practice
Authors: Karuzas, A, Sablauskas, K, Skrodenis, L, Verikas, D, Rumbinaite, E, Zaliaduonyte-Peksiene, D, Ziuteliene, K, Vaškelytė, JJ, Jurkevicius, R, Plisiene, J
Source: European heart journal: ESC Congress 2019 365 : Paris, France, 31 August-4 September 2019 / European Society of Cardiology, Oxford : Oxford University Press, 2019, vol. 40, suppl. 1, October, p. 773, no. P1465 ; ISSN 0195-668X ; eISSN 1522-9645
Publication Year: 2019
Collection: LSRC VL (Lithuanian Social Research Centre Virtual Library) / LSTC VB (Lietuvos socialinių tyrimų centras virtualią biblioteką)
Subject Terms: Echocardiography, Artificial intelligence, info:eu-repo/classification/udc/616.12-073.432.41
Description:
INTRODUCTION: The use of artificial intelligence (AI) in echocardiography has grown exponentially in recent years, offering new ways to overcome inter-operator variability and dependence on operator experience. Although AI applications in echocardiography are still in their infancy, they promise to improve the accuracy and efficiency of manual tracings. Deep learning, a subset of machine learning, is gaining popularity in echocardiography as the state of the art in visual data analysis.
PURPOSE: To evaluate deep learning for two initial tasks in automated cardiac measurements: view recognition and end-systolic (ES) and end-diastolic (ED) frame detection.
METHODS: 2D echocardiography data from 230 patients (with various indications for the study) were used to train and validate neural networks. Raw pixel data were extracted from EPIQ 7G, Vivid E95 and Vivid 7 imaging platforms. Images were labeled according to their view: parasternal long axis (PLA), basal short axis, short axis at mitral level, and apical two-, three- and four-chamber (A4C). Additionally, ES and ED frames were labeled for the A4C and PLA views. Images were de-identified by applying black pixel masks to non-anatomical data and removing metadata. A convolutional neural network (CNN) was used to classify the 6 views; a total of 34752 and 3972 frames (5792 and 662 per view) were used to train and validate the network, respectively. A Long-term Recurrent Convolutional Network (LRCN), combining temporal and spatial cognition, was used for ES and ED frame detection; a total of 195 and 35 sequences with a length of 92 frames were used to train and validate the LRCN.
RESULTS: The CNN for view classification achieved an AUC of 0.95 (sensitivity 95%, specificity 97%). Accuracy was lower for visually similar views, namely the apical three-chamber and apical two-chamber. Training for ES
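The de-identification step in the Methods (black pixel masks over non-anatomical data) can be illustrated with a short sketch. The snippet below is hypothetical, not the authors' code: it assumes grayscale frames as NumPy arrays and a precomputed boolean mask of the anatomical sector; metadata stripping (e.g., DICOM tags) would be handled separately.

```python
import numpy as np

def deidentify(frame: np.ndarray, roi_mask: np.ndarray) -> np.ndarray:
    """Zero out every pixel outside the anatomical region of interest."""
    assert frame.shape == roi_mask.shape
    return np.where(roi_mask, frame, 0).astype(frame.dtype)

# Hypothetical mask geometry: the real sector shape depends on the
# EPIQ 7G / Vivid export layout and is not given in the abstract.
frame = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
mask = np.zeros((128, 128), dtype=bool)
mask[16:112, 16:112] = True          # keep only the central imaging region
clean = deidentify(frame, mask)      # UI overlays / patient banners -> black
```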
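The abstract does not disclose the CNN architecture used for six-view classification. The following is a minimal illustrative sketch in PyTorch (an assumption; the framework is not stated), with layer sizes and a 128x128 grayscale input chosen arbitrarily.

```python
import torch
import torch.nn as nn

class ViewClassifier(nn.Module):
    """Toy CNN that classifies one echo frame into one of 6 standard views."""

    def __init__(self, n_views: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 128 -> 64
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 64 -> 32
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pool -> (B, 64, 1, 1)
        )
        self.classifier = nn.Linear(64, n_views)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, H, W) de-identified grayscale frames
        return self.classifier(self.features(x).flatten(1))

model = ViewClassifier()
logits = model(torch.randn(8, 1, 128, 128))  # batch of 8 dummy frames
print(logits.shape)  # torch.Size([8, 6])
```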
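Likewise, the LRCN for ES/ED frame detection is only named, not specified. Below is a minimal sketch of the general LRCN pattern (a per-frame CNN feeding an LSTM), assuming 92-frame clips as in the abstract and a hypothetical per-frame three-class output (ES / ED / neither); the authors' actual output framing is not given.

```python
import torch
import torch.nn as nn

class LRCN(nn.Module):
    """Per-frame CNN features passed through an LSTM for temporal context."""

    def __init__(self, feat_dim: int = 64, hidden: int = 128, n_classes: int = 3):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, feat_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, time, 1, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).flatten(1).view(b, t, -1)
        out, _ = self.lstm(feats)   # temporal cognition across the sequence
        return self.head(out)       # per-frame ES/ED logits

model = LRCN()
scores = model(torch.randn(2, 92, 1, 64, 64))  # two 92-frame dummy clips
print(scores.shape)  # torch.Size([2, 92, 3])
```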
Document Type: conference object
Language: English
Relation: http://lsmu.lvb.lt/LSMU:ELABAPDB41321983&prefLang=en_US
Availability: https://doi.org/10.1093/eurheartj/ehz748.0230
http://lsmu.lvb.lt/LSMU:ELABAPDB41321983&prefLang=en_US
Rights: info:eu-repo/semantics/embargoedAccess
Accession Number: edsbas.42DDD97A
Database: BASE