In logistic regression, you're assuming $p(\text{class } y\!=\!1|x) = \sigma(w^\top\!x+b)$ for some parameters $w$, $b$ that can be estimated from the data. Typically, a point $x$ is assigned to class $y\!=\!1$ if $\sigma(w^\top\!x+b)>0.5$, and to $y\!=\!0$ otherwise.
The decision boundary is the set of points $x$ where $\sigma(w^\top\!x+b)=0.5$, or equivalently $w^\top\!x+b =0$ (a hyperplane orthogonal to $w$, with $w^\top\!x=-b$).
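To make the decision rule concrete, here is a minimal sketch in NumPy; the values of `w` and `b` are made-up placeholders rather than parameters estimated from data.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical fitted parameters (placeholders, not estimated from real data).
w = np.array([2.0, -1.0])
b = 0.5

def predict(x, w, b):
    # Class 1 iff sigma(w.x + b) > 0.5, i.e. iff w.x + b > 0.
    return int(sigmoid(w @ x + b) > 0.5)

x = np.array([1.0, 0.5])
print(predict(x, w, b))  # 1, since w @ x + b = 1.5 + 0.5 = 2 > 0
```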
Given some $x$ that the model labels $y\!=\!1$ (i.e. $w^\top\!x+b >0$), its closest point $x'$ on the decision boundary is found by subtracting some multiple $\lambda$ of the vector $w$, since $w$ is orthogonal to the decision boundary. That is, $\lambda$ satisfies $0=w^\top\!x'+b =w^\top\!(x\!-\!\lambda w)+b$.
Rearranging gives $\lambda = \tfrac{w^\top\!x+b}{w^\top w} \ $, so $\ x' = x\!-\!\tfrac{w^\top\!x+b}{w^\top w} w\ $ is the projection of $x$ onto the decision boundary. Picking a larger $\lambda$ gives a point past the decision boundary, which is assigned class $y\!=\!0$.
(Note that $\lambda = \tfrac{w^\top\!x+b}{w^\top w}$ must be positive since $w^\top\!x+b$ is positive for the given $x$ and $w^\top\!w$ is always positive).
So, for $x$ labelled $y\!=\!1$, the closest point $x^*$ labelled $y\!=\!0$ (by Euclidean distance) is $\ x^* = x-\lambda^* w\ $, where $\ \lambda^*=\tfrac{w^\top\!x+b}{w^\top w}+\delta\ $, and $\delta\!>\!0$ is small.
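As a sanity check, here is the same computation in NumPy, again a sketch with made-up $w$, $b$, and $x$: the projected point $x'$ lands exactly on the boundary, and stepping past it by $\delta$ flips the predicted class.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical parameters and a point currently labelled y = 1 (placeholders).
w = np.array([2.0, -1.0])
b = 0.5
x = np.array([1.0, 0.5])
assert w @ x + b > 0                  # x is on the y = 1 side

# Projection onto the decision boundary: x' = x - lambda * w.
lam = (w @ x + b) / (w @ w)
x_proj = x - lam * w
print(sigmoid(w @ x_proj + b))        # 0.5: x' lies exactly on the boundary

# Step slightly past the boundary to get the closest point labelled y = 0.
delta = 1e-6
x_star = x - (lam + delta) * w
print(sigmoid(w @ x_star + b) > 0.5)  # False: x* is now classified as y = 0
```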