What is the advantage of data-science-specific CI/CD tools (Kubeflow, Algo, TFX, MLflow, SageMaker Pipelines) over the already established, more generic options: Jenkins, Bamboo, Airflow, Google Cloud Build, ...?
My guess is that the data science ones provide more structure around common ML operations and are better optimized for the compute and memory needed to train models, deploy them, run steps in parallel, and serve inference. Is that right?