A good method for identifying mislabeled data is Confident Learning (CL). It can use the predictions from any trained classifier to automatically flag which examples appear to be incorrectly labeled. Because Confident Learning directly estimates which datapoints have label errors, it can also estimate the overall fraction of the dataset that the curator mislabeled. This method was previously used to discover tons of label errors in many major ML benchmark datasets.
Intuitively, a baseline solution would be to flag any example where the classifier's prediction differs from the given label. However, this baseline performs poorly whenever the classifier makes mistakes, which is practically inevitable. Confident Learning additionally accounts for the classifier's confidence level in each prediction and its propensity to predict certain classes (e.g. some classifiers incorrectly predict class A far too often due to a training shortcoming, especially in imbalanced settings like the one you are dealing with). It does this in a theoretically principled way, which ensures most label errors can still be identified even with an imperfect classifier.
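To make that intuition concrete, here is a minimal, self-contained sketch of the core idea (not the full algorithm): use out-of-sample predicted probabilities and per-class confidence thresholds to flag examples that are confidently predicted into a different class than their given label. The classifier, the threshold rule, and the function name here are illustrative choices of mine, not part of any particular library.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict


def find_label_issues_sketch(X, labels, clf=None):
    """Flag examples confidently predicted into a different class than their given label.

    Assumes `labels` are integer-encoded as 0..K-1.
    """
    labels = np.asarray(labels)
    if clf is None:
        clf = LogisticRegression(max_iter=1000)

    # Out-of-sample predicted probabilities, so the model cannot simply
    # memorize the (possibly noisy) labels it is being checked against.
    pred_probs = cross_val_predict(clf, X, labels, cv=5, method="predict_proba")

    n_classes = pred_probs.shape[1]
    # Per-class threshold: the classifier's average confidence on examples
    # *given* that label. This adapts to classes the model tends to
    # over- or under-predict (helpful with class imbalance).
    thresholds = np.array(
        [pred_probs[labels == j, j].mean() for j in range(n_classes)]
    )

    issues = np.zeros(len(labels), dtype=bool)
    for i, y in enumerate(labels):
        # Classes this example is "confidently" counted into.
        confident = np.where(pred_probs[i] >= thresholds)[0]
        if confident.size == 0:
            continue
        # Most likely confident class; a mismatch with the given label
        # suggests a potential label error.
        best = confident[np.argmax(pred_probs[i, confident])]
        if best != y:
            issues[i] = True
    return issues
```

This is only meant to show why confidence thresholds beat the naive "prediction differs from label" baseline; the full method does more (e.g. estimating the joint distribution of given vs. true labels).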
Here is a Python implementation of this CL algorithm that I helped develop, which is easy to run on most types of classification data (image, text, tabular, audio, ...) and supports binary, multi-class, and multi-label settings.
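In case a quick-start helps, below is a minimal usage sketch, assuming the implementation referred to is the open-source cleanlab package (that is my assumption, not stated above). The toy dataset and logistic-regression model are just placeholders; any classifier that outputs predicted class probabilities can be substituted.

```python
from cleanlab.filter import find_label_issues
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

# Toy stand-in for your data: `X` are features, `labels` are the given
# (possibly noisy) integer class labels.
X, labels = make_classification(
    n_samples=500, n_classes=3, n_informative=5, random_state=0
)

# Out-of-sample predicted probabilities from any classifier of your choice.
pred_probs = cross_val_predict(
    LogisticRegression(max_iter=1000), X, labels, cv=5, method="predict_proba"
)

# Indices of likely label errors, ordered with the most severe first.
issue_indices = find_label_issues(
    labels=labels,
    pred_probs=pred_probs,
    return_indices_ranked_by="self_confidence",
)
print(f"Flagged {len(issue_indices)} potentially mislabeled examples")
```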