Academic Journal

Is ChatGPT a trusted source of information for total hip and knee arthroplasty patients?

Bibliographic Details
Title: Is ChatGPT a trusted source of information for total hip and knee arthroplasty patients?
Authors: Benjamin M. Wright, Michael S. Bodnar, Andrew D. Moore, Meghan C. Maseda, Michael P. Kucharik, Connor C. Diaz, Christian M. Schmidt, Hassan R. Mir
Source: Bone & Joint Open, Vol 5, Iss 2, Pp 139-146 (2024)
Publication Information: The British Editorial Society of Bone & Joint Surgery, 2024.
Publication Year: 2024
Collection: LCC:Orthopedic surgery
Subject Terms: chatgpt, total hip arthroplasty, total knee arthroplasty, patient questions, accuracy, readability, total hip and knee arthroplasty, total knee arthroplasty (tka), knee arthroplasty, hip, resident physicians, bone tumours, physicians, paediatric orthopaedics, distal radius fractures, t-tests, Orthopedic surgery, RD701-811
Description: Aims: While internet search engines have been the primary information source for patients’ questions, artificial intelligence large language models like ChatGPT are trending towards becoming the new primary source. The purpose of this study was to determine if ChatGPT can answer patient questions about total hip (THA) and knee arthroplasty (TKA) with consistent accuracy, comprehensiveness, and easy readability. Methods: We posed the 20 most Google-searched questions about THA and TKA, plus ten additional postoperative questions, to ChatGPT. Each question was asked twice to evaluate for consistency in quality. Following each response, we responded with, “Please explain so it is easier to understand,” to evaluate ChatGPT’s ability to reduce response reading grade level, measured as Flesch-Kincaid Grade Level (FKGL). Five resident physicians rated the 120 responses on 1 to 5 accuracy and comprehensiveness scales. Additionally, they answered a “yes” or “no” question regarding acceptability. Mean scores were calculated for each question, and responses were deemed acceptable if ≥ four raters answered “yes.” Results: The mean accuracy and comprehensiveness scores were 4.26 (95% confidence interval (CI) 4.19 to 4.33) and 3.79 (95% CI 3.69 to 3.89), respectively. Out of all the responses, 59.2% (71/120; 95% CI 50.0% to 67.7%) were acceptable. ChatGPT was consistent when asked the same question twice, giving no significant difference in accuracy (t = 0.821; p = 0.415), comprehensiveness (t = 1.387; p = 0.171), acceptability (χ2 = 1.832; p = 0.176), and FKGL (t = 0.264; p = 0.793). There was a significantly lower FKGL (t = 2.204; p = 0.029) for easier responses (11.14; 95% CI 10.57 to 11.71) than original responses (12.15; 95% CI 11.45 to 12.85). Conclusion: ChatGPT answered THA and TKA patient questions with accuracy comparable to previous reports of websites, with adequate comprehensiveness, but with limited acceptability as the sole information source. ChatGPT has potential for answering patient questions about THA and TKA, but needs improvement. Cite this article: Bone Jt Open 2024;5(2):139–146.
Document Type: article
File Description: electronic resource
Language: English
ISSN: 2633-1462
Relation: https://doaj.org/toc/2633-1462
DOI: 10.1302/2633-1462.52.BJO-2023-0113.R1
Open Access: https://doaj.org/article/5d5d37289639425cbed29e730d8415d4
Accession Number: edsdoj.5d5d37289639425cbed29e730d8415d4
Database: Directory of Open Access Journals
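
Note: The Methods in the abstract above describe scoring responses with the Flesch-Kincaid Grade Level (FKGL) and comparing original versus simplified ChatGPT answers with a t-test. The following is a minimal illustrative sketch of that kind of comparison, not the authors' code; it assumes the Python textstat package for FKGL scoring, SciPy for a paired t-test, and placeholder response texts, since the record does not specify the exact statistical setup.

# Minimal sketch (assumed tooling, not the study's code): score responses with
# FKGL and compare original vs. simplified versions with a paired t-test.
import textstat
from scipy import stats

# Hypothetical matched pairs: each original response and its "easier" rewrite.
original_responses = [
    "Total hip arthroplasty is a surgical procedure that replaces the hip joint.",
    "Postoperative rehabilitation protocols typically emphasize early mobilization.",
    "Deep vein thrombosis prophylaxis is commonly prescribed after arthroplasty.",
]
simplified_responses = [
    "Hip replacement is surgery that gives you a new hip joint.",
    "After surgery, you usually start moving and walking early.",
    "You will likely get medicine to prevent blood clots after surgery.",
]

# Compute Flesch-Kincaid Grade Level for each response.
fkgl_original = [textstat.flesch_kincaid_grade(t) for t in original_responses]
fkgl_simplified = [textstat.flesch_kincaid_grade(t) for t in simplified_responses]

# Paired t-test across matched responses (assumed pairing; the abstract reports
# t = 2.204, p = 0.029 for its full set of 120 responses).
t_stat, p_value = stats.ttest_rel(fkgl_original, fkgl_simplified)
print(f"Mean FKGL original: {sum(fkgl_original) / len(fkgl_original):.2f}")
print(f"Mean FKGL simplified: {sum(fkgl_simplified) / len(fkgl_simplified):.2f}")
print(f"paired t = {t_stat:.3f}, p = {p_value:.3f}")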