It is relatively simple if you understand what variance refers to in this context. A model has high variance if it is very sensitive to (small) changes in the training data.
A decision tree has high variance because, if you imagine a very large tree, it can basically tailor its predictions to every single training example.
Suppose you want to predict the outcome of a soccer game. A decision tree could learn a rule like:
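To make that concrete, here is a minimal sketch (scikit-learn on made-up data, so the numbers are only illustrative): change a single training example, and a fully grown tree's prediction at that point changes with it.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(50, 1))            # one toy feature
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 50)    # noisy toy target

deep_a = DecisionTreeRegressor().fit(X, y)      # no depth limit

# "Small change in the training data": shift a single target value.
y_changed = y.copy()
y_changed[0] += 1.0
deep_b = DecisionTreeRegressor().fit(X, y_changed)

query = X[:1]                                   # ask both trees about the same point
print(deep_a.predict(query), deep_b.predict(query))
# The two predictions differ by about 1.0, because an unrestricted tree
# effectively gives each training point its own leaf.
```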
IF
- player X is on the field AND
- team A has a home game AND
- the weather is sunny AND
- the number of attending fans >= 26000 AND
- it is past 3pm
THEN team A wins.
If the tree is very deep, its rules get very specific, and you may have only one such game in your training data. It probably would not be appropriate to base your predictions on just one example.
Now, if you make a small change, e.g. set the number of attending fans to 25999, a decision tree might give you a completely different answer (because the game no longer meets the 4th condition).
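Here is a toy version of that rule with scikit-learn, reduced to the fan count only (the data is made up to mirror the example, so treat it as an illustration, not a realistic model):

```python
from sklearn.tree import DecisionTreeClassifier

# fans attending -> did team A win? (made-up data mirroring the example)
X = [[24000], [25999], [26000], [28000]]
y = [0, 0, 1, 1]

tree = DecisionTreeClassifier().fit(X, y)

# The learned split falls between 25999 and 26000, so a one-fan change
# flips the prediction completely.
print(tree.predict([[26000]]))   # [1] -> team A wins
print(tree.predict([[25999]]))   # [0] -> team A loses
```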
Linear regression, for example, would not be so sensitive to such a small change, because it is restricted ("biased", see the bias-variance tradeoff) to linear relationships and cannot represent a sudden jump between 25999 and 26000 fans.
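For contrast, a linear model fit to the same made-up fan counts barely reacts to that one-fan difference (again just a sketch, not a claim that you would model wins as a linear function of attendance):

```python
from sklearn.linear_model import LinearRegression

X = [[24000], [25999], [26000], [28000]]   # same toy data as above
y = [0, 0, 1, 1]

lin = LinearRegression().fit(X, y)

# The prediction changes only by the slope times one fan (~0.00025 here),
# instead of jumping from "loses" to "wins".
print(lin.predict([[26000]]) - lin.predict([[25999]]))
```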
That's why it is important not to make decision trees arbitrarily large/deep: limiting their depth limits their variance.
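One rough way to see this (again a sketch on synthetic data, using scikit-learn's `max_depth` as the knob): refit the tree on many bootstrap resamples of the training set and look at how much the prediction at one fixed point moves around.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.3, 200)
query = np.array([[5.0]])

def prediction_spread(max_depth, n_rounds=200):
    """Std of the prediction at `query` across bootstrap refits."""
    preds = []
    for _ in range(n_rounds):
        idx = rng.integers(0, len(X), len(X))    # bootstrap resample
        model = DecisionTreeRegressor(max_depth=max_depth).fit(X[idx], y[idx])
        preds.append(model.predict(query)[0])
    return np.std(preds)

print(prediction_spread(max_depth=None))  # unrestricted tree: large spread
print(prediction_spread(max_depth=3))     # depth-limited tree: clearly smaller spread
```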
(See e.g. here for more on how random forests can reduce the variance even further.)
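The same bootstrap experiment, repeated here so it runs on its own, gives a rough feel for the random forest idea: averaging many deep trees, each fit to a resample, shrinks the spread well below that of any single deep tree (a sketch with scikit-learn defaults and made-up data):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.3, 200)
query = np.array([[5.0]])

def spread(make_model, n_rounds=100):
    """Std of the prediction at `query` across bootstrap refits of a model."""
    preds = []
    for _ in range(n_rounds):
        idx = rng.integers(0, len(X), len(X))    # bootstrap resample
        preds.append(make_model().fit(X[idx], y[idx]).predict(query)[0])
    return np.std(preds)

print(spread(DecisionTreeRegressor))                            # single deep tree
print(spread(lambda: RandomForestRegressor(n_estimators=100)))  # clearly smaller spread
```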