Abstract

Serverless computing and, in particular, the Functions as a Service (FaaS) model has become a compelling paradigm for developing and deploying highly scalable applications in the cloud. This is due to the transparent management of three key functionalities: event-based triggering of functions, automatic provisioning and scaling of resources, and fine-grained pay-per-use. This article presents a serverless web-based scientific gateway to execute the inference phase of previously trained machine learning and artificial intelligence models. The execution of the models is performed both in Amazon Web Services and in on-premises clouds with the OSCAR framework for serverless scientific computing. In both cases, the computing infrastructure grows elastically according to demand, adopting scale-to-zero approaches to minimize costs. The web interface improves the user experience by simplifying the use of the models. The use of machine learning on a computing platform that spans both on-premises and public clouds constitutes a step forward in the adoption of serverless computing for scientific applications.
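To illustrate the event-triggered inference pattern summarized above, the following is a minimal sketch of a serverless inference handler in Python. It is not taken from the article: the AWS Lambda-style `handler(event, context)` signature is standard, but the packaged `model.joblib` file, the `instances` field of the request body, and the use of scikit-learn/joblib are illustrative assumptions.

```python
import json
from pathlib import Path

import joblib  # assumed to be packaged in the function's runtime image
import numpy as np

MODEL_PATH = Path(__file__).parent / "model.joblib"  # hypothetical pre-trained model
_model = None  # cached across warm invocations; reloaded after a scale-to-zero cold start


def _load_model():
    """Load the pre-trained model once per container instance."""
    global _model
    if _model is None:
        _model = joblib.load(MODEL_PATH)
    return _model


def handler(event, context):
    """Event-triggered inference entry point (AWS Lambda-style signature).

    The event body is expected to carry a JSON list of feature vectors,
    e.g. {"instances": [[5.1, 3.5, 1.4, 0.2], ...]} (illustrative format).
    """
    payload = json.loads(event.get("body", "{}"))
    features = np.asarray(payload["instances"], dtype=float)

    predictions = _load_model().predict(features)

    return {
        "statusCode": 200,
        "body": json.dumps({"predictions": predictions.tolist()}),
    }
```

Caching the model in a module-level variable is relevant to the scale-to-zero approach mentioned above: the loading cost is paid only on a cold start, while subsequent warm invocations reuse the already loaded model.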
