The main stages of Machine Learning pipelines are considered in the paper: training data collection and storage, model training, and scoring. The effect of the Big Data phenomenon on each of these stages is discussed, and different approaches to the efficient organization of computation at each stage are evaluated. In the first part of the paper we introduce the notions of horizontal and vertical scalability together with their respective pros and cons, and consider some limitations of scaling, such as Amdahl's law. In the second part of the paper we consider the scalability of data storage routines. We first discuss relational databases and the scalability limitations related to the ACID guarantees that such databases satisfy, and then turn to horizontally scalable non-relational databases, the so-called NoSQL databases. We formulate the CAP theorem as a fundamental limitation of horizontally scalable databases. The third part of the paper is dedicated to the scalability of computation based on the MapReduce programming model. We discuss implementations of this programming model, such as Hadoop and Spark, together with the basic principles on which they are built. In the fourth part of the article we consider various approaches to scaling Machine Learning methods. We give a general statement of the Machine Learning problem, and then show how the MapReduce programming model can be applied to horizontal scaling of Machine Learning methods, using a Bayesian pattern recognition procedure as an example. Using Deep Neural Networks as an example, we discuss Machine Learning methods that are not horizontally scalable. We then consider approaches to vertical scaling of such methods based on GPUs and the TensorFlow programming model.
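For reference, Amdahl's law, mentioned above as a limitation of scaling, is commonly stated as follows (this is the standard formulation, not necessarily the exact one used in the paper): if a fraction $p$ of a workload can be parallelized, the speedup achievable on $N$ processors is bounded by

\[
S(N) = \frac{1}{(1 - p) + \dfrac{p}{N}}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1 - p}.
\]

For example, with $p = 0.95$ the speedup can never exceed $20$, no matter how many nodes are added horizontally.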
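To illustrate the kind of horizontal scaling of a Bayesian recognition procedure via MapReduce that the abstract refers to, here is a minimal PySpark-style sketch of estimating naive Bayes parameters with map and reduce operations. The input path, record format, and the choice of a naive Bayes classifier are assumptions made for illustration only, not the procedure described in the paper.

```python
# Hypothetical sketch: counting sufficient statistics for a naive Bayes
# classifier with MapReduce-style operations in PySpark. The HDFS path and
# the "<label>,<f1>,...,<fk>" record format with binary features are assumed.
from pyspark import SparkContext

sc = SparkContext(appName="NaiveBayesMapReduceSketch")

def parse(line):
    parts = line.strip().split(",")
    return parts[0], [int(x) for x in parts[1:]]

# Map step: parse each record on whichever worker holds its partition.
records = sc.textFile("hdfs:///data/train.csv").map(parse)

# Reduce step 1: class counts, emitting (label, 1) and summing per key.
class_counts = records.map(lambda r: (r[0], 1)).reduceByKey(lambda a, b: a + b)

# Reduce step 2: per-class feature counts via ((label, feature_index), value).
feature_counts = (
    records
    .flatMap(lambda r: [((r[0], i), v) for i, v in enumerate(r[1])])
    .reduceByKey(lambda a, b: a + b)
)

total = records.count()
priors = {label: n / total for label, n in class_counts.collect()}
print(priors)  # estimated class priors P(y)
```

Because the per-key sums are associative, each worker only needs its own data partition, which is what makes this kind of estimation horizontally scalable, in contrast with the jointly updated parameters of Deep Neural Network training discussed later in the paper.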