The MDS (multidimensional scaling) method is used to solve the problem of dimensionality reduction. Roughly, it does the following: given $n$ points $x_1,\cdots,x_n\in\mathbb R^d$, find a smaller dimension $d'<d$ and points $y_1,\cdots,y_n\in\mathbb R^{d'}$ such that $$|x_i-x_j|=|y_i-y_j|\quad\text{for all }i,j\quad\quad\quad(1)$$
The reasoning goes like this: Let $D=(d_{ij})$, where $d_{ij}=|x_i-x_j|$, be the distance matrix of $\{x_i\}$. If a set of points $\{y_i\}$ is such that
$\{y_i\}$ are centered at $0$, i.e. $\sum_{i=1}^ny_i=0$;
$\{y_i\}$ satisfy condition (1) above,
then the matrix $Y$, whose columns are the $y_i$, satisfies $Y^TY=B=(b_{ij})$, where each $b_{ij}$ can be determined from $D=(d_{ij})$ alone. In other words, we only need to solve the matrix equation $Y^TY=B$ for $Y$, where $B$ can be considered known.
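(For concreteness, the determination of $B$ from $D$ is the standard double-centering step, which I'll record here: $$b_{ij}=-\frac12\Big(d_{ij}^2-\frac1n\sum_{k=1}^nd_{ik}^2-\frac1n\sum_{k=1}^nd_{kj}^2+\frac1{n^2}\sum_{k,l=1}^nd_{kl}^2\Big),$$ or equivalently $B=-\frac12 JD^{(2)}J$ with $J=I-\frac1n\mathbf 1\mathbf 1^T$ and $D^{(2)}=(d_{ij}^2)$.)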
After this, most textbooks and online material would say: "We see that if $B=V\Lambda V^T$ is an eigenvalue decomposition of $B$, then $Y=\Lambda^{1/2}V^T$ is a solution of $Y^TY=B$, and we can take the columns of $\Lambda^{1/2}V^T$ as our choice of $\{y_i\}$."
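To make the recipe concrete, here is a minimal NumPy sketch of it as I understand it (the function name `classical_mds` is mine, and clipping negative eigenvalues to zero is an ad hoc choice that anticipates question 2 below):

```python
import numpy as np

def classical_mds(D, d_prime):
    """Recover d'-dimensional coordinates from an n x n distance matrix D.

    Follows the textbook recipe quoted above: double-center the squared
    distances to get B, eigendecompose B, and set Y = Lambda^{1/2} V^T.
    Negative eigenvalues are clipped to zero here -- an ad hoc choice,
    which is exactly what question 2 below asks about.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix

    eigvals, eigvecs = np.linalg.eigh(B)       # eigenvalues in ascending order
    idx = np.argsort(eigvals)[::-1][:d_prime]  # keep the d' largest
    lam = np.clip(eigvals[idx], 0.0, None)
    V = eigvecs[:, idx]

    return np.diag(np.sqrt(lam)) @ V.T         # d' x n; columns are the y_i
```

On a distance matrix that genuinely comes from Euclidean points, this reproduces them up to a rigid motion; but the two questions below are exactly about why the recipe is legitimate.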
Questions:
How can we guarantee that $\sum_{i=1}^ny_i=0$ if we take $Y=\Lambda^{1/2}V^T$?
What if $B$ has negative eigenvalues? How can we take $\Lambda^{1/2}$ in that case?
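To make the second question concrete: negative eigenvalues genuinely occur whenever $D$ is a metric that is not Euclidean. Here is a small example of my own (one point at distance $1$ from three points that are mutually at distance $2$; an equilateral triangle with side $2$ has circumradius $2/\sqrt3>1$, so no Euclidean configuration realizes these distances):

```python
import numpy as np

# A valid metric that no Euclidean point set realizes.
D = np.array([[0, 1, 1, 1],
              [1, 0, 2, 2],
              [1, 2, 0, 2],
              [1, 2, 2, 0]], dtype=float)

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J

print(np.linalg.eigvalsh(B))  # approximately [-0.25, 0., 2., 2.]
```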