Annotation quality is very low, not usable for training
The title speaks for itself.
I disagree with that characterization. GTSinger has already been widely used in both academia and industry to train singing and music models, so it is simply inaccurate to call it "not usable for training." For the Chinese and English portions in particular, the lyric annotations, score annotations, and MFA alignments have generally been regarded as reasonably reliable. Among the other languages, only the Korean and Italian subsets lack the same level of fine-grained annotation, and we have never tried to hide that limitation.
More importantly, this dataset was not assembled casually. We involved dozens of university students in a careful manual verification and correction process, and a substantial amount of human effort went into improving the annotation quality.
If you believe there are serious annotation problems, please point them out with concrete evidence and specific metrics: which subset, which type of annotation, what error rate, and what reproducible impact on training results. Simply saying that the annotation quality is "very low" and "not usable for training" is not a serious or professional assessment.
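To make the kind of report we are asking for concrete, here is a minimal sketch of one such metric: the fraction of segment boundaries in an automatic alignment (e.g. from MFA) that deviate from a manually verified reference by more than a tolerance. The `(label, start, end)` tuple format and the 20 ms tolerance are illustrative assumptions, not GTSinger's actual schema.

```python
# Hypothetical sketch: quantifying alignment annotation error.
# Assumes each tier is a list of (label, start_sec, end_sec) tuples
# with segments matched one-to-one by index; this is an illustrative
# format, not GTSinger's actual annotation schema.

def boundary_error_rate(reference, hypothesis, tolerance=0.02):
    """Fraction of hypothesis boundaries deviating from the
    reference by more than `tolerance` seconds."""
    if len(reference) != len(hypothesis):
        raise ValueError("tiers must contain the same number of segments")
    errors = 0
    total = 0
    for (_, r_start, r_end), (_, h_start, h_end) in zip(reference, hypothesis):
        for r, h in ((r_start, h_start), (r_end, h_end)):
            total += 1
            if abs(r - h) > tolerance:
                errors += 1
    return errors / total if total else 0.0

# Manually verified reference vs. a hypothetical automatic alignment:
ref = [("n", 0.00, 0.12), ("i", 0.12, 0.35), ("h", 0.35, 0.41)]
hyp = [("n", 0.00, 0.13), ("i", 0.13, 0.40), ("h", 0.40, 0.41)]
print(boundary_error_rate(ref, hyp))  # 2 of 6 boundaries off by > 20 ms
```

A report along these lines (subset, annotation type, metric definition, measured rate) would be something we could verify and act on.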