MERT: Acoustic Music Understanding Model with Large-Scale Self-supervised Training (CCF: none)
08/2022 – 05/2023
Supervised by Dr Emmanouil Benetos, Centre for Digital Music, Queen Mary University of London
- Built self-supervised learning systems whose released checkpoints have accumulated 50k+ downloads on Hugging Face.
- Replaced MFCC-based pseudo-labels with Chroma features so that training targets capture harmonic information (sketched below).
- Utilised deep features from EnCodec in place of k-means cluster labels, enabling models to scale up to 1B parameters.
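A minimal sketch of the Chroma-based pseudo-label idea referenced above (illustrative only; the function name, cluster count, and audio parameters are assumptions, not the actual MERT training code):

```python
import librosa
import numpy as np
from sklearn.cluster import KMeans

def chroma_pseudo_labels(audio_path, n_clusters=300, sr=24000, hop_length=480):
    """Derive frame-level discrete pseudo-labels from Chroma features.

    Chroma captures harmonic (pitch-class) content that MFCC-derived targets
    miss; the resulting cluster IDs can serve as prediction targets for
    masked self-supervised training.
    """
    y, _ = librosa.load(audio_path, sr=sr, mono=True)
    # 12-dimensional chroma per frame, transposed to shape (frames, 12)
    chroma = librosa.feature.chroma_cqt(y=y, sr=sr, hop_length=hop_length).T
    # Quantise frames into discrete pseudo-labels with k-means
    # (a real pipeline would fit k-means over the whole corpus, not one file)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(chroma)
    return km.labels_  # one pseudo-label per frame

# At larger scale, discrete codes from a neural codec such as EnCodec can play
# the same target role, removing the separate k-means quantisation step.
```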