You can indeed use weak learners (as the components of an ensemble are commonly called) other than decision trees. That said, decision tree ensembles are by far the most widely used, especially gradient boosted trees and random forests.
Sometimes, such non-tree ensembles serve mainly as a conceptual tool for analysing algorithms, e.g. when you try to understand dropout in neural networks from an ensemble perspective (see this paper). Sometimes they are used to improve model performance, especially in competition settings: solutions that score well in Kaggle competitions often involve complex ensembles.
Ensembles also need not be composed of weak learners from the same model class. For example, a recent paper finds evidence that ensembles combining gradient boosted tree models (which are, of course, themselves ensembles) with certain neural networks perform well.
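Just to make the idea of mixing model classes concrete, here is a minimal sketch (not the paper's method) that blends a gradient boosted tree model with a small neural network by averaging their predictions with scikit-learn's `VotingRegressor`. The dataset and hyperparameters are placeholders chosen purely for illustration:

```python
# Hedged sketch: equal-weight blend of a GBT model and an MLP.
# Dataset and hyperparameters are illustrative assumptions only.
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import GradientBoostingRegressor, VotingRegressor
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = fetch_california_housing(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

gbt = GradientBoostingRegressor(random_state=0)
# Neural networks usually want scaled inputs, so wrap the MLP in a pipeline.
nn = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0),
)

# VotingRegressor averages the two models' predictions with equal weight.
blend = VotingRegressor([("gbt", gbt), ("nn", nn)])
blend.fit(X_train, y_train)
print("blended R^2 on held-out data:", blend.score(X_test, y_test))
```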
If you're looking for a simple way to play around with ensembles that aren't based on decision trees, you might want to take a look at XGBoost. It's not only one of the two most important implementations of decision-tree-based ensembles (the other being LightGBM), it also implements boosted ensembles of linear functions.
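For instance, switching XGBoost's booster from the default `gbtree` to `gblinear` makes each boosting round fit a linear model instead of a tree. A short sketch on synthetic data (parameter values are illustrative, not recommendations):

```python
# Hedged sketch: boosting linear base learners in XGBoost via the gblinear booster.
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=500)

dtrain = xgb.DMatrix(X[:400], label=y[:400])
dtest = xgb.DMatrix(X[400:], label=y[400:])

params = {
    "booster": "gblinear",           # boost linear functions rather than trees
    "objective": "reg:squarederror",
    "eta": 0.5,                      # learning rate
    "lambda": 1.0,                   # L2 regularisation on the linear weights
}
model = xgb.train(params, dtrain, num_boost_round=50,
                  evals=[(dtest, "test")], verbose_eval=10)
print(model.predict(dtest)[:5])
```

The resulting model is effectively a regularised linear model fitted in small boosting steps, which is mostly useful as a baseline or for getting a feel for how boosting behaves with very simple base learners.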