MOSA: Music Motion with Semantic Annotation Dataset for Cross-Modal Music Processing

Bibliographic Details
Title: MOSA: Music Motion with Semantic Annotation Dataset for Cross-Modal Music Processing
Authors: Huang, Yu-Fen, Moran, Nikki, Coleman, Simon, Kelly, Jon, Wei, Shun-Hwa, Chen, Po-Yin, Huang, Yun-Hsin, Chen, Tsung-Ping, Kuo, Yu-Chia, Wei, Yu-Chi, Li, Chih-Hsuan, Huang, Da-Yu, Kao, Hsuan-Kai, Lin, Ting-Wei, Su, Li
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Sound, Computer Science - Artificial Intelligence, Electrical Engineering and Systems Science - Audio and Speech Processing
Description: In cross-modal music processing, translation between visual, auditory, and semantic content opens up new possibilities as well as challenges. The construction of such a transformative scheme depends upon a benchmark corpus with a comprehensive data infrastructure. In particular, the assembly of a large-scale cross-modal dataset presents major challenges. In this paper, we present the MOSA (Music mOtion with Semantic Annotation) dataset, which contains high-quality 3-D motion capture data, aligned audio recordings, and note-by-note semantic annotations of pitch, beat, phrase, dynamic, articulation, and harmony for 742 professional music performances by 23 professional musicians, comprising more than 30 hours and 570 K notes of data. To our knowledge, this is the largest cross-modal music dataset with note-level annotations to date. To demonstrate the usage of the MOSA dataset, we present several innovative cross-modal music information retrieval (MIR) and musical content generation tasks, including the detection of beats, downbeats, phrases, and expressive content from audio, video, and motion data, and the generation of musicians' body motion from given music audio. The dataset and code are available alongside this publication (https://github.com/yufenhuang/MOSA-Music-mOtion-and-Semantic-Annotation-dataset).
Comment: IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2024. 14 pages, 7 figures. Dataset is available at: https://github.com/yufenhuang/MOSA-Music-mOtion-and-Semantic-Annotation-dataset/tree/main and https://zenodo.org/records/11393449
Document Type: Working Paper
DOI: 10.1109/TASLP.2024.3407529
Open Access: http://arxiv.org/abs/2406.06375
Accession Number: edsarx.2406.06375
Database: arXiv