Abstract

Google Colaboratory (also known as Colab) is a cloud service based on Jupyter Notebooks for disseminating machine learning education and research. It provides a runtime fully configured for deep learning and free-of-charge access to a robust GPU. This paper presents a detailed analysis of Colaboratory regarding hardware resources, performance, and limitations. The analysis is carried out by using Colaboratory to accelerate deep learning for computer vision and other GPU-centric applications. The chosen test cases are a parallel tree-based combinatorial search and two computer vision applications: object detection/classification and object localization/segmentation. The hardware underlying the accelerated runtime is compared with a mainstream workstation and a robust Linux server equipped with 20 physical cores. Results show that the performance reached using this cloud service is equivalent to that of the dedicated testbeds, given similar resources. Thus, the service can be effectively exploited to accelerate not only deep learning but also other classes of GPU-centric applications. For instance, it is faster to train a CNN on Colaboratory’s accelerated runtime than on 20 physical cores of a Linux server. The performance of the GPU made available by Colaboratory may be enough for several profiles of researchers and students. However, these free-of-charge hardware resources are far from enough to solve demanding real-world problems and are not scalable. The most significant limitation found is the lack of CPU cores. Finally, several strengths and limitations of this cloud service are discussed, which may be useful for potential users.
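The CPU-core limitation noted above is easy to verify from a notebook cell. A minimal sketch using only the Python standard library; the helper name `runtime_cores` is illustrative, and the two-virtual-core figure mentioned in the comment reflects Colab's typical free-tier allocation at the time of the study, not a guarantee:

```python
import os

def runtime_cores():
    """Number of CPU cores visible to this runtime.

    Colab's free tier typically exposed 2 virtual cores, far fewer
    than the 20 physical cores of the Linux server used as a testbed.
    """
    return os.cpu_count() or 1

print(f"CPU cores available: {runtime_cores()}")
```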

Highlights

  • Deep learning applications are present in different aspects of our daily lives, such as web search engines, social network recommendations, natural language recognition, and e-commerce suggestions [1]

  • We show that Google Colaboratory can be effectively used to accelerate deep learning and other classes of graphics processing unit (GPU)-centric scientific applications

  • The main deep learning frameworks are programmed for NVIDIA GPUs, and the performance of the GPU provided by Colaboratory may be enough for several profiles of researchers and students


Introduction

Deep learning applications are present in many aspects of our daily lives, such as web search engines, social network recommendations, natural language recognition, and e-commerce suggestions [1]. This class of applications usually relies on heavy computations over massive datasets. Graphics processing units (GPUs) are massively parallel devices well suited to performing such tasks. This kind of accelerator is ubiquitous and accessible, and delivers a high GFlops/dollar ratio [2]. Many computer vision applications rely on the power of deep learning methods, such as Convolutional Neural Networks (CNNs) [8].
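In a Colab notebook, the presence and model of the NVIDIA GPU backing the accelerated runtime can be queried with `nvidia-smi`. A minimal sketch using only the Python standard library; `gpu_name` is an illustrative helper, not part of any Colab API, and it degrades gracefully on a CPU-only runtime:

```python
import shutil
import subprocess

def gpu_name():
    """Return the name of the first visible NVIDIA GPU, or None if absent."""
    if shutil.which("nvidia-smi") is None:
        return None  # no NVIDIA driver tools: CPU-only runtime
    try:
        result = subprocess.run(
            ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
    except subprocess.CalledProcessError:
        return None  # tool present but no usable GPU
    names = result.stdout.strip().splitlines()
    return names[0] if names else None

print(gpu_name() or "No NVIDIA GPU visible (CPU-only runtime)")
```

Running this in an accelerated Colab session prints the allocated GPU model, which determines whether the runtime is adequate for a given workload.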
