Abstract

An estimated 2.6 quintillion bytes of data are generated every day, and over the last two years data generation and processing have surged owing to increasingly affordable technologies and devices. To make this information easily accessible, we need to classify the data and predict an accurate, or at least approximate, result that is forwarded to the end-user client. To achieve this, the information technology industry is increasingly turning to machine learning and edge computing. Machine learning is an integral subset of artificial intelligence. In machine learning, the first step towards the above task is to observe the data, which is produced in large volumes; the data are then classified so that the system can learn (train) from the old data (experience) stored at the server level; and finally the system predicts an estimate as its result. The obtained result is then transmitted to the devices that requested the particular data. These devices are remotely located at the edge of the central data center's network, and the paradigm in which the data is processed at this edge rather than at the data center is called edge computing. In today's world of high computation, these two technologies, machine learning and edge computing, are of overwhelming significance to the business market and to end-user clients. Here, we discuss a few possibilities for integrating the two technologies.
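The server-side train step and edge-side predict step described above can be sketched with a toy classifier. This is a minimal illustration, not the paper's method: the nearest-centroid classifier, the sensor-reading data, and the labels are all assumptions chosen only to show the division of work between the data center and the edge device.

```python
# Minimal sketch of the abstract's workflow (all data and the classifier
# choice are illustrative assumptions, not taken from the paper).
from statistics import mean

# "Old data (experience)" stored at the server: (feature, label) pairs.
training_data = [(1.0, "low"), (1.2, "low"), (8.9, "high"), (9.3, "high")]

def train(samples):
    """Server-side step: summarize the stored data into per-class centroids."""
    centroids = {}
    for label in {lbl for _, lbl in samples}:
        centroids[label] = mean(x for x, lbl in samples if lbl == label)
    return centroids

def predict(model, x):
    """Edge-side step: classify a new reading against the trained model."""
    return min(model, key=lambda label: abs(model[label] - x))

model = train(training_data)   # runs once at the central data center
result = predict(model, 8.5)   # runs at the edge device, near the client
print(result)                  # -> "high"
```

In a deployment, `train` would run in the data center on the accumulated history, while the compact `model` (here, just two centroids) would be pushed to the edge devices so each prediction avoids a round trip to the server.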
