Abstract

Building machine learning (ML) and deep learning (DL) pipelines requires broad expertise: knowledge of ML algorithms, the various ML frameworks, and techniques for data ingestion, processing, and visualization. Operationalizing an ML pipeline efficiently further demands skills in system design and in implementing distributed systems across the cloud-edge spectrum. To address these development and deployment challenges, we present Stratum, an ML-as-a-service platform with the following unique capabilities. First, it offers a collaborative, version-controlled graphical environment in which developers can build and operationalize ML pipelines without deep domain expertise. Second, it provides a language for the automated development and operationalization of ML pipelines using the desired ML frameworks on the target infrastructure. Third, it integrates a resource monitoring and management interface that supports pluggable resource-management logic for scaling ML pipeline components across the cloud-edge spectrum. In this chapter, we outline ML pipeline design principles and system design principles, and present the lessons learned in building ML-as-a-service through case studies.

Lockheed Martin Advanced Technology Labs
