Showing 1 - 10 of 35 results for '"Version identification"', query time: 0.89s
  1.
    Thesis

    Authors: Salamon, Justin J.

    Contributors: University/Department: Universitat Pompeu Fabra. Departament de Tecnologies de la Informació i les Comunicacions

    Thesis advisors: Gómez Gutiérrez, Emilia, Serra, Xavier

    Source: TDX (Tesis Doctorals en Xarxa)

    Subject terms: Melody extraction, Predominant melody estimation, Fundamental frequency, Music information retrieval, Audio content processing, Pitch, Contour, Polyphonic, Music similarity, Version identification, Query by humming, Melody, Bass line, Harmony, Genre classification, Tonic identification, Indian classical music, Flamenco, Automatic music transcription, Melodic transcription, Evaluation methodology, Auditory scene analysis, Melodic contour, Music signal processing

    Description: Music was the first mass-market industry to be completely restructured by digital technology, and today we can have access to thousands of tracks stored locally on our smartphone and millions of tracks through cloud-based music services. Given the vast quantity of music at our fingertips, we now require novel ways of describing, indexing, searching and interacting with musical content. In this thesis we focus on a technology that opens the door to a wide range of such applications: automatically estimating the pitch sequence of the melody directly from the audio signal of a polyphonic music recording, also referred to as melody extraction. Whilst identifying the pitch of the melody is something human listeners can do quite well, doing this automatically is highly challenging. We present a novel method for melody extraction based on the tracking and characterisation of the pitch contours that form the melodic line of a piece. We show how different contour characteristics can be exploited in combination with auditory streaming cues to identify the melody out of all the pitch content in a music recording using both heuristic and model-based approaches. The performance of our method is assessed in an international evaluation campaign where it is shown to obtain state-of-the-art results. In fact, it achieves the highest mean overall accuracy obtained by any algorithm that has participated in the campaign to date. We demonstrate the applicability of our method both for research and end-user applications by developing systems that exploit the extracted melody pitch sequence for similarity-based music retrieval (version identification and query-by-humming), genre classification, automatic transcription and computational music analysis. The thesis also provides a comprehensive comparative analysis and review of the current state-of-the-art in melody extraction and a first of its kind analysis of melody extraction evaluation methodology.

    Doctoral programme in Information and Communication Technologies

    File description: application/pdf
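The contour-based idea in the abstract above can be illustrated with a toy sketch. This is not the thesis algorithm (which uses salience functions, auditory streaming cues and learned models); it only shows the core step of grouping per-frame pitch candidates into time-continuous contours and then picking the melody by contour salience. All names and the `max_jump` threshold are hypothetical.

```python
# Illustrative sketch (not the thesis algorithm): group frame-wise pitch
# candidates into contours, then pick the melody per frame by salience.

def build_contours(frames, max_jump=50.0):
    """Group per-frame (pitch_hz, salience) candidates into contours.

    frames: list of lists of (pitch, salience) per analysis frame.
    A candidate extends a contour if its pitch is within max_jump Hz
    of the contour's last pitch; otherwise it starts a new contour.
    """
    contours = []   # finished contours: (start_frame, [(pitch, sal), ...])
    active = []     # contours still open at the previous frame
    for t, cands in enumerate(frames):
        next_active = []
        used = set()
        for start, points in active:
            last_pitch = points[-1][0]
            best = None
            for i, (p, s) in enumerate(cands):
                if i in used:
                    continue
                if abs(p - last_pitch) <= max_jump and (best is None or s > cands[best][1]):
                    best = i
            if best is None:
                contours.append((start, points))      # contour ends here
            else:
                used.add(best)
                next_active.append((start, points + [cands[best]]))
        for i, c in enumerate(cands):                 # unclaimed candidates
            if i not in used:
                next_active.append((t, [c]))          # start new contours
        active = next_active
    contours.extend(active)
    return contours

def melody_pitch(frames):
    """Per-frame melody pitch: the highest mean-salience contour wins."""
    contours = build_contours(frames)
    out = [0.0] * len(frames)
    best_sal = [-1.0] * len(frames)
    for start, points in contours:
        mean_sal = sum(s for _, s in points) / len(points)
        for k, (p, _) in enumerate(points):
            if mean_sal > best_sal[start + k]:
                best_sal[start + k] = mean_sal
                out[start + k] = p
    return out
```

In the real method, contour characteristics (length, vibrato, pitch height) and streaming cues replace this single salience rule.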

  2.
    Academic journal

    Source: Jisuanji kexue, Vol 50, Iss 1, Pp 373-379 (2023)

    Description: The widespread use of public software component libraries speeds up software development while expanding the attack surface of software. Vulnerabilities in public component libraries propagate to all software that uses the library files, and compatibility, stability, and development-delay constraints make such vulnerabilities difficult to fix, so the patching period is long. Software component analysis is an important tool for solving such problems, but limited by ineffective feature selection and the difficulty of extracting accurate features from public component libraries, the accuracy of component analysis is not high and generally stays at the level of locating the kind of component. In this paper, we propose a public component library feature extraction method based on cross-fingerprint analysis. We build a fingerprint library from 25,000 open-source projects on the GitHub platform and propose source-string role classification, export-function fingerprint analysis, binary compilation fingerprint analysis, etc., to extract cross-fingerprints of component libraries and achieve accurate localization of public component libraries. We develop a prototype tool, LVRecognizer, and test and evaluate it on 516 real software packages, obtaining an accuracy rate of 94.74%.

    File description: electronic resource
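The abstract above combines several fingerprint types (source-string roles, export functions, binary compilation fingerprints). As a minimal sketch of the general idea, not of LVRecognizer itself, each library version can be summarised by the set of literal strings it contains, and an unknown binary matched to the version with the highest Jaccard overlap. All names below are hypothetical.

```python
# Minimal sketch of fingerprint-based component identification via
# string-set overlap (Jaccard similarity). Real tools combine multiple
# fingerprint types; this shows only the string-set idea.

def jaccard(a, b):
    """Jaccard similarity of two sets (0.0 when both are empty)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def identify(unknown_strings, fingerprint_db):
    """Return (library_version, score) with the best string-set overlap.

    fingerprint_db maps a version label to its set of literal strings.
    """
    best = max(fingerprint_db.items(),
               key=lambda kv: jaccard(unknown_strings, kv[1]))
    return best[0], jaccard(unknown_strings, best[1])
```

In practice the string sets would be extracted from binaries (e.g. printable-string scanning), and a score threshold would guard against false matches.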

  3.
    Academic journal

    Source: Browse all Theses and Dissertations

    Description: Identifying the version of the Solidity compiler used to create an Ethereum contract is a challenging task, especially when the contract bytecode is obfuscated and lacks explicit metadata. Ethereum bytecode is highly complex, as it is generated by the Solidity compiler, which translates high-level programming constructs into low-level, stack-based code. Additionally, the Solidity compiler undergoes frequent updates and modifications, resulting in continuous evolution of bytecode patterns. To address this challenge, we propose using deep learning models to analyze Ethereum bytecodes and infer the compiler version that produced them. A large number of Ethereum contracts and the corresponding compiler versions are used to train these models. The dataset includes contracts compiled with various versions of the Solidity compiler. We preprocess the dataset to extract opcode sequences from the bytecode, which serve as inputs for the deep learning models. We use advanced sequence-learning methods such as bidirectional long short-term memory (Bi-LSTM), convolutional neural network (CNN), CNN+Bi-LSTM, Transformer, and Sentence BERT (SBERT) to capture the semantics of the opcode sequences. We analyze each model's performance using metrics such as accuracy, precision, recall, and F1-score. Our results demonstrate that our developed models excel at identifying the Solidity compiler version used in smart contracts with high accuracy. We also compare our methods with non-sequence learning models, showing that our models outperform them in most cases. This highlights the advantages of our proposed approaches for identifying Solidity compiler versions from Ethereum bytecodes.

    File description: application/pdf

  4.
    Academic journal

    Authors: Frank Zalkow, Meinard Müller

    Source: Applied Sciences; Volume 10; Issue 1; Pages: 19

    Geographic subject: agris

    Description: Cross-version music retrieval aims at identifying all versions of a given piece of music using a short query audio fragment. One previous approach, which is particularly suited for Western classical music, is based on a nearest neighbor search using short sequences of chroma features, also referred to as audio shingles. From the viewpoint of efficiency, indexing and dimensionality reduction are important aspects. In this paper, we extend previous work by adapting two embedding techniques; one is based on classical principal component analysis, and the other is based on neural networks with triplet loss. Furthermore, we report on systematically conducted experiments with Western classical music recordings and discuss the trade-off between retrieval quality and embedding dimensionality. As one main result, we show that, using neural networks, one can reduce the audio shingles from 240 to fewer than 8 dimensions with only a moderate loss in retrieval accuracy. In addition, we present extended experiments with databases of different sizes and different query lengths to test the scalability and generalizability of the dimensionality reduction methods. We also provide a more detailed view into the retrieval problem by analyzing the distances that appear in the nearest neighbor search.

    File description: application/pdf

    Relation: Acoustics and Vibrations; https://dx.doi.org/10.3390/app10010019
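The shingle-plus-embedding pipeline described above can be sketched with plain PCA, one of the two techniques the paper adapts. This is a sketch under assumed shapes: 20 stacked 12-dimensional chroma frames give the 240-dimensional shingles mentioned in the abstract; the frame count and target dimension are illustrative choices.

```python
import numpy as np

# Sketch: stack chroma frames into shingles, PCA-reduce them, and
# retrieve by nearest-neighbour search in the reduced space.

def make_shingles(chroma, frames_per_shingle=20):
    """chroma: (n_frames, 12) array -> (n_shingles, 12*frames_per_shingle)."""
    n = chroma.shape[0] - frames_per_shingle + 1
    return np.stack([chroma[i:i + frames_per_shingle].ravel() for i in range(n)])

def fit_pca(X, dim):
    """Return (mean, components) for projecting onto the top `dim` PCs."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:dim]

def project(X, mean, components):
    """Center and project shingles into the low-dimensional space."""
    return (X - mean) @ components.T

def nearest(query_vec, db):
    """Index of the database shingle closest to the query (Euclidean)."""
    return int(np.argmin(np.linalg.norm(db - query_vec, axis=1)))
```

The paper's neural-network alternative (triplet loss) would replace `fit_pca`/`project` with a learned embedding, keeping the same retrieval step.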

  5.
    Academic journal

    Authors: Bhatt, Manish

    Source: University of New Orleans Theses and Dissertations

    Description: In this paper, we present a working research prototype codeid-elf for ELF binaries based on its Windows counterpart codeid, which can identify kernels through relocation entries extracted from the binaries. We show that relocation-based signatures are unique and distinct and thus can be used to accurately determine Linux kernel versions and derandomize the base address of the kernel in memory (when kernel Address Space Layout Randomization is enabled). We evaluate the effectiveness of codeid-elf on a subset of Linux kernels and find that the relocations in kernel code have nearly 100% code coverage and low similarity (uniqueness) across various kernels. Finally, we show that codeid-elf, which leverages relocations in kernel code, can detect all kernel versions in the test set with almost 100% page hit rate and nearly zero false negatives.

    File description: application/pdf
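The relocation-signature idea above can be reduced to a small sketch: the set of offsets at which a kernel image has relocation entries acts as a fingerprint, and observed offsets are matched against per-version signatures. This is not codeid-elf itself; parsing ELF relocation sections is elided, offsets are supplied directly, and the threshold is a hypothetical parameter.

```python
# Hypothetical sketch of relocation-based kernel identification:
# match observed relocation offsets against per-version signatures.

def page_hit_rate(observed_offsets, signature):
    """Fraction of signature offsets found among the observed offsets."""
    if not signature:
        return 0.0
    return len(signature & observed_offsets) / len(signature)

def identify_kernel(observed_offsets, signatures, threshold=0.9):
    """Best-matching kernel version, or None if no signature clears the bar."""
    version, rate = max(
        ((v, page_hit_rate(observed_offsets, sig)) for v, sig in signatures.items()),
        key=lambda kv: kv[1],
    )
    return version if rate >= threshold else None
```

With a known kernel base removed from the observed addresses, the same matching also derandomizes the KASLR slide: the shift that maximises the hit rate is the slide.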

  6.

    Source: Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining

    Description: Paper presented at the 15th ACM International Conference on Web Search and Data Mining (WSDM 2022), held virtually, 21-25 February 2022. Version identification (VI) systems now offer accurate and scalable solutions for detecting different renditions of a musical composition, allowing the use of these systems in industrial applications and throughout the wider music ecosystem. Such use can have an important impact on various stakeholders regarding recognition and financial benefits, including how royalties are circulated for digital rights management. In this work, we take a step toward acknowledging this impact and consider VI systems as socio-technical systems rather than isolated technologies. We propose a framework for quantifying performance disparities across 5 systems and 6 relevant side attributes: gender, popularity, country, language, year, and prevalence. We also consider 3 main stakeholders for this particular information retrieval use case: the performing artists of query tracks, those of reference (original) tracks, and the composers. By categorizing the recordings in our dataset using such attributes and stakeholders, we analyze whether the considered VI systems show any implicit biases. We find signs of disparities in identification performance for most of the groups we include in our analyses. We also find that learning- and rule-based systems behave differently for some attributes, which suggests an additional dimension to consider along with accuracy and scalability when evaluating VI systems. Lastly, we share our dataset to encourage VI researchers to take these aspects into account while building new systems. This work is supported by the MIP-Frontiers project, the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 765068.

    File description: application/pdf
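The disparity analysis described above boils down to per-group performance: given per-query retrieval outcomes annotated with a side attribute (gender, language, and so on), compute accuracy per group and the gap between the best and worst groups. The field names in this sketch are illustrative, not taken from the paper's dataset.

```python
from collections import defaultdict

# Sketch: per-group accuracy and best-minus-worst disparity for one
# side attribute. `results` is a list of dicts with a boolean "correct"
# field plus attribute fields (names here are hypothetical).

def group_accuracy(results, attribute):
    """Map each attribute value to the accuracy within that group."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in results:
        g = r[attribute]
        totals[g] += 1
        hits[g] += int(r["correct"])
    return {g: hits[g] / totals[g] for g in totals}

def disparity(results, attribute):
    """Gap between the best- and worst-served groups for one attribute."""
    acc = group_accuracy(results, attribute)
    return max(acc.values()) - min(acc.values())
```

Running this per system and per attribute yields a disparity matrix, which is the kind of evidence the paper uses to compare learning- and rule-based VI systems.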

  7.

    Source: ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

    Description: Paper presented at the 2021 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2021), held virtually, 6-11 June 2021. The setlist identification (SLI) task addresses a music recognition use case where the goal is to retrieve the metadata and timestamps for all the tracks played in live music events. Due to various musical and non-musical changes in live performances, developing automatic SLI systems is still a challenging task that, despite its industrial relevance, has been under-explored in the academic literature. In this paper, we propose an end-to-end workflow that identifies relevant metadata and timestamps of live music performances using a version identification system. We compare three such systems to investigate their suitability for this particular task. For developing and evaluating SLI systems, we also contribute a new dataset that contains 99.5 hours of concerts with annotated metadata and timestamps, along with the corresponding reference set. The dataset is categorized by audio qualities and genres to analyze the performance of SLI systems in different use cases. Our approach can identify 68% of the annotated segments, with values ranging from 35% to 77% based on the genre. Finally, we evaluate our approach against a database of 56.8k songs to illustrate the effect of expanding the reference set, where we can still identify 56% of the annotated segments. This work is supported by the MIP-Frontiers project, the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 765068.

    File description: application/pdf

  8.
    Academic journal

    Contributors: Consejo Superior de Investigaciones Científicas (España), Ministerio de Educación y Ciencia (España), European Commission, Generalitat de Catalunya

    Description: In this study we compare the use of different music representations for retrieving alternative performances of the same musical piece, a task commonly referred to as version identification. Given the audio signal of a song, we compute descriptors representing its melody, bass line and harmonic progression using state-of-the-art algorithms. These descriptors are then employed to retrieve different versions of the same musical piece using a dynamic programming algorithm based on nonlinear time series analysis. First, we evaluate the accuracy obtained using individual descriptors, and then we examine whether performance can be improved by combining these music representations (i.e. descriptor fusion). Our results show that whilst harmony is the most reliable music representation for version identification, the melody and bass line representations also carry useful information for this task. Furthermore, we show that by combining these tonal representations we can increase version detection accuracy. Finally, we demonstrate how the proposed version identification method can be adapted for the task of query-by-humming. We propose a melody-based retrieval approach, and demonstrate how melody representations extracted from recordings of a cappella singing can be successfully used to retrieve the original song from a collection of polyphonic audio. The current limitations of the proposed approach are discussed in the context of version identification and query-by-humming, and possible solutions and future research directions are proposed. This research was funded by Programa de Formación del Profesorado Universitario (FPU) of the Ministerio de Educación de España, Consejo Superior de Investigaciones Científicas (JAEDOC069/2010), Generalitat de Catalunya (2009-SGR-1434) and the European Commission, FP7 (Seventh Framework Programme), ICT-2011.1.5 Networked Media and Search Systems, grant agreement No. 287711. Peer Reviewed.

    Relation: info:eu-repo/grantAgreement/EC/FP7/287711; Preprint; International Journal of Multimedia Information Retrieval 2 (1): 45-58 (2013); http://hdl.handle.net/10261/133865; http://dx.doi.org/10.13039/501100003339; http://dx.doi.org/10.13039/501100000780; http://dx.doi.org/10.13039/501100002809

  9.

    Authors: Meinard Müller, Frank Zalkow

    Source: Applied Sciences; Volume 10; Issue 1

    Description: Cross-version music retrieval aims at identifying all versions of a given piece of music using a short query audio fragment. One previous approach, which is particularly suited for Western classical music, is based on a nearest neighbor search using short sequences of chroma features, also referred to as audio shingles. From the viewpoint of efficiency, indexing and dimensionality reduction are important aspects. In this paper, we extend previous work by adapting two embedding techniques; one is based on classical principal component analysis, and the other is based on neural networks with triplet loss. Furthermore, we report on systematically conducted experiments with Western classical music recordings and discuss the trade-off between retrieval quality and embedding dimensionality. As one main result, we show that, using neural networks, one can reduce the audio shingles from 240 to fewer than 8 dimensions with only a moderate loss in retrieval accuracy. In addition, we present extended experiments with databases of different sizes and different query lengths to test the scalability and generalizability of the dimensionality reduction methods. We also provide a more detailed view into the retrieval problem by analyzing the distances that appear in the nearest neighbor search.

    File description: application/pdf

  10.
    Academic journal

    Authors: Emilia Gómez

    Contributors: The Pennsylvania State University CiteSeerX Archives

    Description: Identifying versions of the same song by means of automatically extracted audio features is a complex task for a music information retrieval system, even though it may seem very simple for a human listener. The design of a system to perform this task gives the opportunity to analyze which features are relevant for music similarity. This paper focuses on the analysis of tonal similarity and its application to the identification of different versions of the same piece. This work formulates the situations where a song is versioned and several musical aspects are transformed with respect to the canonical version. A quantitative evaluation is made using tonal descriptors, including chroma representations and tonality. A simple similarity measure, based on Dynamic Time Warping over transposed chroma features, yields around 55% accuracy, which exceeds by far the expected random baseline rate.

    File description: application/pdf
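The similarity measure named in the abstract above, Dynamic Time Warping over transposed chroma features, can be sketched in a brute-force form: compute the DTW cost between two chroma sequences for every one of the 12 circular pitch-class shifts of one sequence, so versions performed in different keys can still match. The L1 frame cost and unconstrained warping path are simplifying assumptions, not the paper's exact configuration.

```python
# Sketch: DTW similarity between chroma sequences, minimised over all
# 12 circular transpositions of the second sequence.

def dtw_cost(a, b):
    """Classic DTW over sequences of 12-dim chroma vectors (L1 frame cost)."""
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = sum(abs(x - y) for x, y in zip(a[i - 1], b[j - 1]))
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def transposed_dtw(a, b):
    """Minimum DTW cost over the 12 circular shifts of b's pitch classes."""
    def shift(frame, k):
        return frame[k:] + frame[:k]
    return min(dtw_cost(a, [shift(f, k) for f in b]) for k in range(12))
```

In practice a single best transposition is often estimated first (e.g. from average chroma profiles) to avoid running DTW twelve times per song pair.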