Given multivariate data split into several subsamples (classes), discriminant analysis finds linear combinations of the variables, called discriminant functions, that discriminate between the classes and are mutually uncorrelated. The functions are then applied to assign existing or new observations to the classes. Discriminant analysis is thus both a dimensionality-reduction and a classification technique.
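As a quick illustration of that dual role, the sketch below uses one fitted model for both reduction and classification. It assumes scikit-learn is available; the synthetic data and all variable names are this example's own, not taken from any question listed here.

```python
# Illustrative sketch: LDA used both to reduce dimensionality and to classify.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Three classes in 4-D, 50 points each, with shifted means
X = np.vstack([rng.normal(loc=m, size=(50, 4)) for m in (0.0, 1.5, 3.0)])
y = np.repeat([0, 1, 2], 50)

lda = LinearDiscriminantAnalysis(n_components=2)  # at most n_classes - 1 discriminants
Z = lda.fit_transform(X, y)                       # dimensionality reduction: 4-D -> 2-D
labels = lda.predict(X)                           # classification with the same model

print(Z.shape)               # (150, 2)
print((labels == y).mean())  # training accuracy
```

The discriminant scores in `Z` are uncorrelated across components, which is the "uncorrelated linear combinations" property described above.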
Questions tagged [discriminant-analysis]
21 questions
6
votes
1 answer
Bayes Optimal Decision Boundaries for Gaussian Data with Equal Covariance
I am drawing samples from two classes in the two-dimensional Cartesian space, each of which has the same covariance matrix $[2, 0; 0, 2]$. One class has a mean of $[1.5, 1]$ and the other has a mean of $[1, 1.5]$. If the priors are $4/7$ for the…
John
6
votes
2 answers
Varying results when calculating scatter matrices for LDA
I'm following a Linear Discriminant Analysis tutorial from here for dimensionality reduction. After working through the tutorial (did the PCA part, too), I shortened the code using sklearn modules where applicable and verified it on the Iris data…
fukiburi
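For reference, the scatter matrices such tutorials compute fit in a few lines; the sketch below assumes NumPy and scikit-learn's copy of the Iris data mentioned in the excerpt (variable names are this example's own):

```python
# Within-class and between-class scatter matrices for LDA on Iris.
import numpy as np
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
overall_mean = X.mean(axis=0)

S_W = np.zeros((4, 4))  # within-class scatter
S_B = np.zeros((4, 4))  # between-class scatter
for c in np.unique(y):
    Xc = X[y == c]
    mc = Xc.mean(axis=0)
    S_W += (Xc - mc).T @ (Xc - mc)
    d = (mc - overall_mean).reshape(-1, 1)
    S_B += len(Xc) * (d @ d.T)

# LDA directions are eigenvectors of S_W^{-1} S_B; at most n_classes - 1
# eigenvalues are nonzero, one common source of "varying results" between
# hand-rolled code and library output (another is scaling by n or n - 1).
eigvals = np.linalg.eigvals(np.linalg.inv(S_W) @ S_B)
print(np.sort(eigvals.real)[::-1])
```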
6
votes
1 answer
How are Hyperplane Heatmaps created and how should they be interpreted?
For nonlinear data, when we are using Support Vector Machines, we can use kernels such as Gaussian RBF, Polynomial, etc to achieve linearity in a different (potentially unknown to us) feature space and the algorithm learns a maximal separating…
Ragnar
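Such heatmaps are typically produced by evaluating the trained model's decision function on a dense grid and coloring by its value; points where it crosses zero trace the boundary. A sketch assuming scikit-learn's `SVC` (the synthetic circular data is this example's assumption):

```python
# Evaluate an RBF-SVM decision function on a grid, as a heatmap would.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1).astype(int)  # nonlinear (circular) labels

clf = SVC(kernel="rbf", gamma=1.0).fit(X, y)

# Grid covering the data; decision_function gives the signed margin value,
# computed in the kernel-induced feature space but plotted in input space.
xs = np.linspace(-3, 3, 50)
xx, yy = np.meshgrid(xs, xs)
grid = np.c_[xx.ravel(), yy.ravel()]
heat = clf.decision_function(grid).reshape(xx.shape)

print(heat.shape)  # (50, 50); plt.contourf(xx, yy, heat) would render it
```

So the heatmap is not the hyperplane itself: it is the separating surface's signed score pulled back into the original coordinates, which is why it looks curved for nonlinear kernels.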
2
votes
1 answer
Difference between FDA and LDA
I asked this question on Mathematics Stack Exchange, but thought it might be a better fit here:
I am currently taking a Data-Analysis course and I learned about both the terms LDA (Linear Discriminant Analysis) and FDA (Fisher's…
Nestroy
2
votes
1 answer
Linear discriminant analysis in R: how to choose the most suitable model?
The data set vaso in the robustbase library summarizes the vasoconstriction (or not) of subjects’ fingers along with their breathing volumes and rates.
> head(vaso)
Volume Rate Y
1 3.70 0.825 1
2 3.50 1.090 1
3 1.25 2.500 1
4 0.75 1.500…
Helen
2
votes
0 answers
What is the favored discriminant analysis package in R?
I have been using the LDA package for R, but it is missing quite a few features especially those that can assess the output.
Are there any preferred packages that have some of the following?
Univariate Test Statistics
Canonical…
Alex
1
vote
1 answer
Help with DDP Mining algorithm for Effective Classification of data sets from 2 groups
I'm trying to implement the DDPmine algorithm from this article as part of some project, and I do not understand where in the algorithm we use the Class Label of each transaction?
We have transactions from 2 different groups; suppose a group has a class…
Eli Zatlawy
1
vote
0 answers
Iterative Reweighted Least Squares in python
I am trying to manually implement IRLS logistic regression (Chapter 4.3.3 in Bishop - Pattern Recognition and Machine Learning) in Python.
For updating the weights, I am using $w' = w-(\Phi^TR\Phi)^{-1}\Phi^T(y-t)$
However I am not getting…
adriankroeger
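The update quoted in the excerpt can be implemented directly, which helps localize where such attempts go wrong (a frequent culprit is a singular weighting matrix on separable data). A minimal sketch assuming NumPy; the small ridge term and the toy data are this example's assumptions, not part of Bishop's text:

```python
# Minimal IRLS for logistic regression following the quoted update:
# w_new = w - (Phi^T R Phi)^{-1} Phi^T (y - t),
# where y = sigmoid(Phi w) and R = diag(y * (1 - y)).
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def irls(Phi, t, n_iter=20):
    w = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        y = sigmoid(Phi @ w)
        R = np.diag(y * (1 - y))
        # Newton step; the tiny ridge guards against a singular Hessian
        H = Phi.T @ R @ Phi + 1e-8 * np.eye(Phi.shape[1])
        w = w - np.linalg.solve(H, Phi.T @ (y - t))
    return w

rng = np.random.default_rng(0)
x = rng.normal(size=60)
t = (x + 0.3 * rng.normal(size=60) > 0).astype(float)
t[:3] = 1 - t[:3]            # flip a few labels so the data is not separable
Phi = np.c_[np.ones_like(x), x]  # design matrix with a bias feature

w = irls(Phi, t)
acc = ((sigmoid(Phi @ w) > 0.5) == t).mean()
print(w, acc)
```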
1
vote
1 answer
Data discrimination after clustering
My task consists of two steps:
1) cluster the data;
2) assign new data to the resulting clusters.
I wanted to highlight the boundaries of clusters as min/max values for each coordinate of an observation belonging to the cluster, then assign…
bvl
1
vote
2 answers
What does $\mathbf{w^Tx}+w_0$ graphically mean in the discriminant function?
I found a post explaining the discriminant function very detailed. But I am still confused about the function $g(\mathbf{x})=\mathbf{w^Tx}+w_0$ in 9.2 Linear Discriminant Functions and Decision Surfaces. What does it represent graphically? Could…
user8314628
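Geometrically, $g(\mathbf{x})=\mathbf{w^Tx}+w_0=0$ defines a hyperplane with normal vector $\mathbf{w}$, and $g(\mathbf{x})/\lVert\mathbf{w}\rVert$ is the signed distance of $\mathbf{x}$ from that plane (positive on the side $\mathbf{w}$ points toward). A numeric check, with $\mathbf{w}$, $w_0$, and $\mathbf{x}$ chosen arbitrarily for this example:

```python
# Signed distance interpretation of g(x) = w.x + w0.
import numpy as np

w = np.array([3.0, 4.0])
w0 = -5.0
x = np.array([3.0, 4.0])

g = w @ x + w0                # 3*3 + 4*4 - 5 = 20
dist = g / np.linalg.norm(w)  # 20 / 5 = 4
print(g, dist)

# Projecting x onto the hyperplane removes exactly that distance:
x_proj = x - dist * w / np.linalg.norm(w)
print(w @ x_proj + w0)        # ~0, so x_proj lies on the plane
```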
1
vote
1 answer
Linear Discriminant - Least Squares Classification Bishop 4.1.3
Please refer to section 4.1.3 in Pattern Recognition - Bishop: "Least squares for classification":
In a 2-class linear discriminant system, we classified vector $\mathbf{x}$ as $\mathcal{C}_1$ if $y(\mathbf{x})>0$, and $\mathcal{C}_2$ otherwise.
Generalizing…
Continue2Learn
1
vote
1 answer
Pattern Recognition, Bishop - (Linear) Discriminant Functions 4.1
Please refer "Pattern Recognition and Machine Learning" - Bishop, page 182.
I am struggling to visualize the intuition behind equations 4.6 & 4.7. I am presenting my understanding of section 4.1.1 using the diagram:
Please note: I have used…
Continue2Learn
1
vote
1 answer
Prove GDA decision boundary is linear
My attempt:
(a) I solved that $a=\ln{\frac{P(X|C_0)P(C_0)}{P(X|C_1)P(C_1)}}$
(b) Here is where I'm running into trouble. I'm plugging the distributions into $\ln{\frac{P(X|C_0)P(C_0)}{P(X|C_1)P(C_1)}}$ and I get…
IrCa
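The step the asker is stuck on can be sketched as follows. With a shared covariance $\Sigma$ for both classes, plugging the Gaussian densities into the log-ratio gives

```latex
a = \ln\frac{P(X\mid C_0)\,P(C_0)}{P(X\mid C_1)\,P(C_1)}
  = -\tfrac{1}{2}(x-\mu_0)^{\top}\Sigma^{-1}(x-\mu_0)
    + \tfrac{1}{2}(x-\mu_1)^{\top}\Sigma^{-1}(x-\mu_1)
    + \ln\frac{P(C_0)}{P(C_1)}.
```

Expanding both quadratic forms, the $x^{\top}\Sigma^{-1}x$ terms cancel, leaving

```latex
a = (\mu_0-\mu_1)^{\top}\Sigma^{-1}x
    - \tfrac{1}{2}\bigl(\mu_0^{\top}\Sigma^{-1}\mu_0 - \mu_1^{\top}\Sigma^{-1}\mu_1\bigr)
    + \ln\frac{P(C_0)}{P(C_1)},
```

which is affine in $x$, so the decision boundary $a=0$ is linear. (This hinges on the covariances being equal; with unequal covariances the quadratic terms survive and the boundary is a quadric.)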
1
vote
1 answer
Naive Bayes Classifier - Discriminant Function
To classify my samples, I decided to use Naive Bayes classifier, but I coded it, not used built-in library functions.
If I use this decision rule, I obtain good classification accuracy: p1(x) > p2(x) => x belongs to C1
However, I could not understand why…
Goktug
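The rule in the excerpt works because the evidence $p(x)$ is a common positive factor: comparing the unnormalized scores $p(x\mid C_k)P(C_k)$ gives the same argmax as comparing the normalized posteriors. A toy check, where the 1-D Gaussian likelihoods and all numbers are this example's assumptions:

```python
# Comparing unnormalized class scores vs. normalized posteriors.
import numpy as np

def normal_pdf(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

priors = (0.6, 0.4)
params = ((0.0, 1.0), (2.0, 1.0))  # (mean, variance) per class

x = 0.8
scores = [p * normal_pdf(x, mu, var) for p, (mu, var) in zip(priors, params)]
evidence = sum(scores)
posteriors = [s / evidence for s in scores]

# Same winner either way: dividing by p(x) cannot change the ordering.
print(int(np.argmax(scores)), int(np.argmax(posteriors)))
```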
1
vote
1 answer
Convert a pdf into a conditional pdf such that mean increases and std dev falls
Let the success metric (for a business use case I am working on) be a continuous random variable S.
The mean of the pdf defined on S indicates the chance of success: the higher the mean, the greater the chance of success. Let the std dev of the pdf defined on S indicate…
claudius