Let's say you have a measurement device whose accuracy you do not know, and a reference measurement device. Both measure a variable $x$. The range of interest is $x_0 < x < x_1$. How do you determine the accuracy of the unknown device in this range?
My course of action would be to gather readings from both devices across the range $x_0$ to $x_1$ and build a distribution of the errors (the differences between the device under test and the reference), as sketched below. The accuracy could then be stated as the error span, $\pm3\sigma$, or something similar - is this correct?
Assumptions:
- The reference measurement device is calibrated and has virtually no error
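A minimal sketch of what I have in mind, assuming NumPy and using made-up paired readings (`x_ref` from the reference device, `x_dut` from the device under test):

```python
import numpy as np

# Hypothetical paired readings taken across the range x_0..x_1:
# x_ref - readings from the calibrated reference device (taken as the true values)
# x_dut - readings from the device under test at the same points
x_ref = np.array([10.0, 12.5, 15.0, 17.5, 20.0])
x_dut = np.array([10.1, 12.4, 15.2, 17.4, 20.3])

# Error of the device under test relative to the reference
errors = x_dut - x_ref

# Summarise the error distribution
bias = errors.mean()           # systematic offset
sigma = errors.std(ddof=1)     # spread (sample standard deviation)
span = errors.max() - errors.min()

print(f"bias  = {bias:+.3f}")
print(f"sigma = {sigma:.3f}  (±3σ band about the bias: ±{3 * sigma:.3f})")
print(f"span  = {span:.3f}")
```

So the question is whether quoting the ±3σ band (or the full span) of this error distribution is a valid way to state the device's accuracy over the range.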