I have a large number of financial time series that I wish to do cluster analysis on. Each time series has the same length and spans multiple years of daily data (returns, volatility, etc.). As part of my research, I wanted to compare the performance of K-means with the performance of more sophisticated clustering algorithms. I decided to simply stick with Euclidean distance as the distance measure for K-means.
My issue is that I was unable to find any examples of how to represent such multivariate time series data when you do K-means clustering with Euclidean distance. My solution was to simply "flatten" each time series so a new variable was created for every variable at each time step. For example, for time series $s$, the open price at time step $t$ would become a new variable $o^{s}_{t}$.
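To make the representation concrete, here is a minimal sketch of the flattening I have in mind (the array names, shapes, and random data are just for illustration; I am assuming scikit-learn's `KMeans`, which minimizes within-cluster squared Euclidean distance):

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical data: n_series multivariate time series, each with
# n_timesteps daily observations of n_features variables
# (e.g. open price, return, volatility).
n_series, n_timesteps, n_features = 500, 1250, 3
rng = np.random.default_rng(0)
X = rng.standard_normal((n_series, n_timesteps, n_features))

# "Flatten" each series: every (time step, variable) pair becomes its
# own column, so series s is represented by a single vector of length
# n_timesteps * n_features containing o^s_t and the other variables.
X_flat = X.reshape(n_series, n_timesteps * n_features)

# K-means on the flattened vectors with its default (squared)
# Euclidean distance.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0)
labels = kmeans.fit_predict(X_flat)
print(labels[:10])
```

Note that the order in which the columns are laid out does not affect the result, since Euclidean distance is just a sum over coordinates.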
Is this a meaningful approach, or will the results of my cluster analysis be meaningless? I understand that if you were to, for example, do price forecasting, then flattening a multivariate time series like this would kill the temporal structure of the data. But since I do not care about forecasting and only wish to cluster the data, is my approach still reasonable?