Text-to-Code Generation with Modality-relative Pre-training

Bibliographic Details
Title: Text-to-Code Generation with Modality-relative Pre-training
Authors: Christopoulou, Fenia; Zhang, Guchun; Lampouras, Gerasimos
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language
Description: Large pre-trained language models have recently been expanded and applied to programming language tasks with great success, often through further pre-training of a strictly-natural language model--where training sequences typically contain both natural and (linearised) programming language. Such approaches effectively map both modalities of the sequence into the same embedding space. However, programming language keywords (e.g. "while") often have very strictly defined semantics. As such, transfer learning from their natural language usage may not necessarily be beneficial to their code application and vice versa. Assuming an already pre-trained language model, in this work we investigate how sequence tokens can be adapted and represented differently, depending on which modality they belong to, and to the ultimate benefit of the downstream task. We experiment with separating embedding spaces between modalities during further model pre-training with modality-relative training objectives. We focus on text-to-code generation and observe consistent improvements across two backbone models and two test sets, measuring pass@$k$ and a novel incremental variation.
Comment: Accepted at EACL 2024. 15 pages, 5 figures, 6 tables
Document Type: Working Paper
Open Access: http://arxiv.org/abs/2402.05783
Accession Number: edsarx.2402.05783
Database: arXiv
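
The description above centres on giving tokens modality-dependent representations, so that a shared-vocabulary token such as "while" is embedded differently when it occurs in natural language than when it occurs in code. The sketch below is a minimal, hypothetical illustration of that idea and is not taken from the paper; the names (ModalityRelativeEmbedding, modality_ids, the 0/1 modality convention) are assumptions for illustration, and the paper's actual objectives, vocabulary handling, and architecture may differ.

import torch
import torch.nn as nn


class ModalityRelativeEmbedding(nn.Module):
    """Token embeddings that depend on the token's modality:
    0 = natural language, 1 = programming language (illustrative convention)."""

    def __init__(self, vocab_size: int, dim: int):
        super().__init__()
        # Two tables over the same vocabulary; the code table could be
        # initialised as a copy of the pre-trained natural-language table.
        self.nl_table = nn.Embedding(vocab_size, dim)
        self.pl_table = nn.Embedding(vocab_size, dim)

    def forward(self, token_ids: torch.Tensor, modality_ids: torch.Tensor) -> torch.Tensor:
        nl = self.nl_table(token_ids)                 # (batch, seq, dim)
        pl = self.pl_table(token_ids)                 # (batch, seq, dim)
        is_code = (modality_ids == 1).unsqueeze(-1)   # (batch, seq, 1)
        return torch.where(is_code, pl, nl)           # per-token table selection


if __name__ == "__main__":
    emb = ModalityRelativeEmbedding(vocab_size=50_000, dim=16)
    tokens = torch.tensor([[11, 742, 742, 12]])       # token 742 appears twice
    modalities = torch.tensor([[0, 0, 1, 0]])         # once as text, once as code
    out = emb(tokens, modalities)
    # The two occurrences of token 742 now receive different vectors.
    print(out.shape, bool(torch.allclose(out[0, 1], out[0, 2])))

One plausible design choice, consistent with the description's assumption of an already pre-trained language model, is to initialise the code-modality table from the pre-trained natural-language embeddings, so further pre-training starts from transferred representations rather than from scratch.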