Abstract

Information technology is built on data management across varied sources. Software projects, from simple applications to systems as complex as self-driving cars, rely heavily on the amounts and types of data ingested by one or more interconnected systems. Data is not only consumed but also transformed or mutated, which requires copious computing resources. Autonomous vehicles, one of the most exciting areas of cyber-physical systems, make heavy use of deep learning and AI to mimic the highly complex actions of a human driver. Mapping human behavior, a large and abstract concept, requires large amounts of data, which AIs use to increase their knowledge and better solve complex problems. This paper outlines a full-fledged solution for managing resources in a multi-cloud environment. The purpose of this API is to accommodate ever-increasing resource requirements by leveraging the multi-cloud and using commercially available tools to scale resources and make systems more resilient while remaining as cloud-agnostic as possible. To that effect, the work herein consists of an architectural breakdown of the resource management API, a low-level description of the implementation, and an experiment aimed at proving the feasibility and applicability of the systems described.

Highlights

  • Not all file types are supported by MIME classification, which is widely used by REST APIs

  • Adaptability to multiple cloud solutions is a baseline requirement for any application or API working in a multi-cloud environment

  • Should a record be consumed by an agent but the resource not be provisioned, the agent places the negative response from Kubernetes in a dead-letter queue (DLQ), natively supported by Kafka; the controller then picks the record up from the DLQ and rehydrates the original topic
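The MIME limitation noted above can be observed with Python's standard `mimetypes` module: well-known extensions map cleanly to a media type, while domain-specific file formats yield no classification at all (the `.rosbag` extension here is purely illustrative, not taken from the paper).

```python
import mimetypes

# A common extension is classified without trouble.
print(mimetypes.guess_type("report.pdf")[0])        # → application/pdf

# A domain-specific format has no registered MIME type,
# so classification fails and the caller gets None.
print(mimetypes.guess_type("telemetry.rosbag")[0])  # → None
```

A REST service relying solely on such classification therefore needs a fallback (for example, a default `application/octet-stream` type) for files it cannot recognize.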

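The DLQ rehydration flow from the highlight above can be sketched as follows. This is a minimal, self-contained model in which in-memory deques stand in for the Kafka topics and a predicate stands in for the Kubernetes API response; all names are illustrative assumptions, not the paper's actual implementation.

```python
from collections import deque

# In-memory stand-ins for Kafka topics (the real system would use Kafka).
requests_topic = deque()     # original provisioning requests
dead_letter_queue = deque()  # DLQ for failed provisioning attempts

def provision(record, cluster_accepts):
    """Agent side: attempt to provision the resource described by `record`.
    On rejection, park the record and the negative response in the DLQ."""
    if cluster_accepts(record):
        return True
    dead_letter_queue.append({"record": record, "reason": "provisioning rejected"})
    return False

def rehydrate():
    """Controller side: move failed records from the DLQ back onto the
    original topic so they can be retried later."""
    while dead_letter_queue:
        failed = dead_letter_queue.popleft()
        requests_topic.append(failed["record"])

# Usage: a request fails, lands in the DLQ, then is rehydrated.
requests_topic.append({"cpu": 4, "memory_gb": 16})
record = requests_topic.popleft()
provision(record, cluster_accepts=lambda r: False)  # cluster rejects it
rehydrate()
# The original record is now back on the requests topic for another attempt.
```

The design choice mirrored here is that failed records are never dropped: the controller, not the agent, decides when a rejected request re-enters the pipeline.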

Introduction

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The proposed solution would use services to connect all of these layers together and ensure that multiple service implementations, perhaps one for each streaming interface, could work together [4]. Another approach to handling such large amounts of data, or to managing resources such as VM instances, Lambda expressions, or load balancing, is to let the AI work not only for the end user’s benefit but for its own benefit as well. In this situation, a possible implementation would use intelligent agents to retrieve information, mine data, or perform language processing [5]. A series of challenges and possible issues will be presented before concluding.

Architecture
Terminology
Approach
REST API Architecture
Functionality
Development
Initially
Data Integrity
Experiment Setup
Experiment Results
Challenges and Future Improvements
Conclusions