When the class-conditional distributions are Gaussian with equal covariance matrices, the optimal decision boundary is a hyperplane. This is the core idea behind Linear Discriminant Analysis (LDA).
For any data point $x$, the class-conditional density of $x$ given class $\omega_1$ is:
$P(x|\omega_1) \sim N(\mu_1,\Sigma)$, that is, $P(x|\omega_1) = (2\pi)^{-d/2}|\Sigma|^{-1/2}\exp\left\{ -\tfrac{1}{2} (x-\mu_1)'\Sigma^{-1}(x-\mu_1) \right\}$, where $d$ is the dimension of $x$.
Similarly, $P(x|\omega_2) \sim N(\mu_2,\Sigma)$.
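As a quick numerical illustration, here is a minimal sketch that evaluates both class-conditional densities at a point. The means, shared covariance, and test point are made-up example values, not anything specified above.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Illustrative parameters (assumed values, not from the text)
mu1 = np.array([0.0, 0.0])
mu2 = np.array([2.0, 1.0])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])   # shared covariance matrix for both classes

x = np.array([1.0, 0.5])

# Class-conditional densities P(x|w1) and P(x|w2)
p1 = multivariate_normal.pdf(x, mean=mu1, cov=Sigma)
p2 = multivariate_normal.pdf(x, mean=mu2, cov=Sigma)
print(p1, p2)
```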
Let's denote the prior probabilities of classes 1 and 2 by $P(\omega_1)$ and $P(\omega_2)$, respectively.
Equal Misclassification Cost:
If the costs of misclassification are equal for the two classes, we want to assign new data points so that the probability of misclassification is minimized. This decision rule assigns a point to class 1 when $x$ satisfies:
$P(x|\omega_1)P(\omega_1) > P(x|\omega_2)P(\omega_2)$
We can establish a similar rule for assigning points to class 2.
To find the decision boundary, we need to find the values of $x$ that satisfy
$P(x|\omega_1)P(\omega_1) = P(x|\omega_2)P(\omega_2)$
The values of $x$ that satisfy this equality lie on a line (or a hyperplane in higher dimensions). Taking logarithms of both sides, the quadratic term $x'\Sigma^{-1}x$ appears on both sides and cancels (this is exactly where the equal covariance matrices matter), leaving
$(\mu_1-\mu_2)'\Sigma^{-1}x = \tfrac{1}{2}(\mu_1-\mu_2)'\Sigma^{-1}(\mu_1+\mu_2) + \ln\frac{P(\omega_2)}{P(\omega_1)}$
which is linear in $x$.
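A minimal sketch of this boundary, again assuming illustrative parameters: it builds the normal vector $w = \Sigma^{-1}(\mu_1-\mu_2)$ and the threshold from the linear expression above, then checks that the resulting linear rule agrees with directly comparing $P(x|\omega_i)P(\omega_i)$.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Illustrative parameters (assumed values, not from the text)
mu1, mu2 = np.array([0.0, 0.0]), np.array([2.0, 1.0])
Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
prior1, prior2 = 0.6, 0.4

Sigma_inv = np.linalg.inv(Sigma)

# Hyperplane w'x = c from the linear expression above
w = Sigma_inv @ (mu1 - mu2)
c = 0.5 * (mu1 - mu2) @ Sigma_inv @ (mu1 + mu2) + np.log(prior2 / prior1)

def classify_linear(x):
    """Assign to class 1 when w'x exceeds the threshold c."""
    return 1 if w @ x > c else 2

def classify_bayes(x):
    """Assign to class 1 when P(x|w1)P(w1) > P(x|w2)P(w2)."""
    s1 = multivariate_normal.pdf(x, mean=mu1, cov=Sigma) * prior1
    s2 = multivariate_normal.pdf(x, mean=mu2, cov=Sigma) * prior2
    return 1 if s1 > s2 else 2

# The two rules agree at every point (up to ties exactly on the boundary)
for x in np.random.default_rng(0).normal(size=(5, 2)):
    assert classify_linear(x) == classify_bayes(x)
```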
Unequal Misclassification Cost:
When the misclassification costs differ, you will need an additional term. Let's call $C(\omega_1)$ the cost of making an error when the data point was actually from class 1, and $C(\omega_2)$ the cost of making an error when the data point was actually from class 2.
To minimize the expected cost of misclassification, the new decision rule is to assign $x$ to class 1 when $x$ satisfies:
$P(x|\omega_1)P(\omega_1)C(\omega_1) > P(x|\omega_2)P(\omega_2)C(\omega_2)$
As before, to find the decision boundary, you need to solve for $x$ when:
$P(x|\omega_1)P(\omega_1)C(\omega_1) = P(x|\omega_2)P(\omega_2)C(\omega_2)$
This decision boundary is still a line (or hyperplane). Because the quadratic terms cancel exactly as before, its orientation is unchanged: the cost and prior ratios only shift its offset, so it is parallel to the boundary from the equal-cost case.
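Continuing the same sketch, the unequal-cost rule can be written by folding the cost ratio into the threshold; the cost values below are illustrative assumptions, and the normal vector $w$ is the same as in the equal-cost case.

```python
import numpy as np

# Illustrative parameters (assumed values, not from the text)
mu1, mu2 = np.array([0.0, 0.0]), np.array([2.0, 1.0])
Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
prior1, prior2 = 0.6, 0.4
cost1, cost2 = 5.0, 1.0   # C(w1): cost of misclassifying a true class-1 point, etc.

Sigma_inv = np.linalg.inv(Sigma)
w = Sigma_inv @ (mu1 - mu2)                      # same normal vector as before
c_equal = 0.5 * (mu1 - mu2) @ Sigma_inv @ (mu1 + mu2) + np.log(prior2 / prior1)
c_unequal = c_equal + np.log(cost2 / cost1)      # costs only shift the offset

def classify_unequal_cost(x):
    """Assign to class 1 when P(x|w1)P(w1)C(w1) > P(x|w2)P(w2)C(w2)."""
    return 1 if w @ x > c_unequal else 2

print(classify_unequal_cost(np.array([1.0, 0.5])))
```

With $C(\omega_1) > C(\omega_2)$ as in this example, the threshold drops, so the class-1 region grows: the rule accepts more points into the class whose errors are more expensive.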