MERT: Acoustic Music Understanding Model with Large-Scale Self-supervised Training

08/2022 – 05/2023

Supervised by Dr Emmanouil Benetos, Centre for Digital Music, Queen Mary University of London

  • Built self-supervised learning systems whose released checkpoints have accumulated 50k+ downloads on Hugging Face (loading sketch below).
  • Replaced the MFCC-based pseudo-labels with chroma music features to capture harmonic information.
  • Utilised deep features from EnCodec, instead of k-means clustering, to scale models up to 1B parameters (see the pseudo-target sketch after this list).
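
The released checkpoints can be loaded directly from Hugging Face. A minimal sketch, assuming the public m-a-p/MERT-v1-95M repository and its 24 kHz input rate (adjust the repo id for other checkpoints):

```python
import torch
from transformers import AutoModel, Wav2Vec2FeatureExtractor

# Assumed repo id for the released 95M-parameter checkpoint.
model = AutoModel.from_pretrained("m-a-p/MERT-v1-95M", trust_remote_code=True)
processor = Wav2Vec2FeatureExtractor.from_pretrained(
    "m-a-p/MERT-v1-95M", trust_remote_code=True
)

waveform = torch.randn(24000 * 5)  # 5 s of dummy audio at 24 kHz
inputs = processor(waveform.numpy(), sampling_rate=24000, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# Average the layer-wise hidden states into one feature sequence,
# which can feed downstream music-understanding probes.
features = torch.stack(outputs.hidden_states).mean(dim=0)
```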
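An illustrative sketch of the two pseudo-target choices above: frame-level chroma classes as harmonic targets (via librosa) and discrete EnCodec codebook indices as acoustic targets (via the transformers EncodecModel). The helper functions and the facebook/encodec_24khz checkpoint are assumptions for illustration, not the project's actual training code:

```python
import librosa
import numpy as np
import torch
from transformers import AutoProcessor, EncodecModel

def chroma_pseudo_labels(waveform: np.ndarray, sr: int) -> np.ndarray:
    """Illustrative: 12-bin chroma per frame, argmaxed into pitch-class labels."""
    chroma = librosa.feature.chroma_cqt(y=waveform, sr=sr)  # (12, frames)
    return chroma.argmax(axis=0)  # one discrete harmonic label per frame

def encodec_pseudo_labels(waveform: np.ndarray, sr: int = 24000) -> torch.Tensor:
    """Illustrative: discrete RVQ codes from a pretrained EnCodec model."""
    model = EncodecModel.from_pretrained("facebook/encodec_24khz")
    processor = AutoProcessor.from_pretrained("facebook/encodec_24khz")
    inputs = processor(raw_audio=waveform, sampling_rate=sr, return_tensors="pt")
    with torch.no_grad():
        encoded = model.encode(inputs["input_values"], inputs["padding_mask"])
    return encoded.audio_codes  # (chunks, batch, n_codebooks, frames)

y, sr = librosa.load(librosa.example("trumpet"), sr=24000)
harmonic_targets = chroma_pseudo_labels(y, sr)
acoustic_targets = encodec_pseudo_labels(y)
```

Discrete codes from a neural codec replace the fixed k-means vocabulary with richer targets, which is what makes scaling the student model worthwhile.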
马英浩 (Nicolaus) MA Yinghao
PhD Student in AI & Music

MA Yinghao is a PhD student at C4DM, QMUL. His research interests include music information retrieval, self-supervised learning, music-related multimodal machine learning, and audio signal processing.
