Academic Journal

Using Transfer Learning for Code-Related Tasks

Bibliographic Details
Title: Using Transfer Learning for Code-Related Tasks
Authors: Mastropaolo, Antonio, Cooper, Nathan, Palacio, David Nader, Scalabrino, Simone, Poshyvanyk, Denys, Oliveto, Rocco, Bavota, Gabriele
Contributors: Mastropaolo, Antonio, Cooper, Nathan, Palacio, David Nader, Scalabrino, Simone, Poshyvanyk, Denys, Oliveto, Rocco, Bavota, Gabriele
Publication Year: 2022
Collection: Università degli Studi del Molise: IRIS
Subject Terms: Codes, Computer bugs, Deep learning, Electronic mail, empirical software engineering, Java, Multitasking, Natural language processing, Task analysis
Description: Deep learning (DL) techniques have been used to support several code-related tasks such as code summarization and bug-fixing. In particular, pre-trained transformer models are on the rise, also thanks to the excellent results they achieved in Natural Language Processing (NLP) tasks. The basic idea behind these models is to first pre-train them on a generic dataset using a self-supervised task (e.g., filling masked words in sentences). Then, these models are fine-tuned to support specific tasks of interest (e.g., language translation). A single model can be fine-tuned to support multiple tasks, possibly exploiting the benefits of transfer learning. This means that knowledge acquired to solve a specific task (e.g., language translation) can be useful to boost performance on another task (e.g., sentiment classification). While the benefits of transfer learning have been widely studied in NLP, limited empirical evidence is available when it comes to code-related tasks. In this paper, we assess the performance of the Text-To-Text Transfer Transformer (T5) model in supporting four different code-related tasks: (i) automatic bug-fixing, (ii) injection of code mutants, (iii) generation of assert statements, and (iv) code summarization. We pay particular attention to studying the role played by pre-training and multi-task fine-tuning on the model's performance. We show that (i) the T5 can achieve better performance as compared to state-of-the-art baselines; and (ii) while pre-training helps the model, not all tasks benefit from multi-task fine-tuning.
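As an illustration of the pre-train/fine-tune paradigm described in the abstract, the sketch below fine-tunes a publicly available pre-trained T5 checkpoint on a single (code, summary) pair and then generates a summary, using the Hugging Face transformers library. The checkpoint name, task prefix, and toy example are assumptions made for illustration; this is not the authors' replication package.

```python
# Minimal sketch of fine-tuning a pre-trained T5 model for code summarization,
# one of the four tasks studied in the paper. Assumes `torch`, `transformers`,
# and `sentencepiece` are installed; "t5-small" is an illustrative checkpoint.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "t5-small"  # assumption: any pre-trained T5 checkpoint would do
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# One toy (code, summary) training pair; the "summarize:" prefix is a T5-style
# task prefix chosen here for illustration.
source = "summarize: public int max(int a, int b) { return a > b ? a : b; }"
target = "returns the larger of two integers"

inputs = tokenizer(source, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids

# Single fine-tuning step: forward pass with labels yields the seq2seq loss.
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
optimizer.zero_grad()
loss = model(input_ids=inputs.input_ids,
             attention_mask=inputs.attention_mask,
             labels=labels).loss
loss.backward()
optimizer.step()

# Inference: generate a natural-language summary for the input code.
model.eval()
with torch.no_grad():
    generated = model.generate(inputs.input_ids, max_length=32)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```

In practice, the same fine-tuned model can be trained on several task-prefixed datasets at once (multi-task fine-tuning), which is the setting whose benefits the paper evaluates empirically.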
Document Type: article in journal/newspaper
Language: English
Relation: firstpage:1; lastpage:20; numberofpages:20; journal:IEEE TRANSACTIONS ON SOFTWARE ENGINEERING; https://hdl.handle.net/11695/117111; info:eu-repo/semantics/altIdentifier/scopus/2-s2.0-85141383902; https://ieeexplore.ieee.org/abstract/document/9797060
DOI: 10.1109/TSE.2022.3183297
Availability: https://doi.org/10.1109/TSE.2022.3183297
https://hdl.handle.net/11695/117111
https://ieeexplore.ieee.org/abstract/document/9797060
Accession Number: edsbas.2A710AF5
Database: BASE