Application Programming Interface (API) Security in Cloud Applications
Many cloud services are offered to users through an API gateway, across delivery models such as Platform as a Service (PaaS), Software as a Service (SaaS), Infrastructure as a Service (IaaS), and cross-platform APIs. Developers design APIs for functionality and speed, writing only a small portion of the code themselves; that portion is visible and can be secured. The code drawn from third-party software and libraries, however, offers no such visibility, which makes it harder to secure. APIs are among the most vulnerable points of attack, and many users are unaware of their insecurity. This paper reviews API security in cloud applications, detailing API vulnerabilities and the existing security tools available to mitigate API attacks. The author's study showed that most users are unaware of API insecurity, that organizations lack the resources and training to educate users about APIs, and that organizations depend on the overall security of the network rather than the security of standalone APIs.
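As a minimal sketch of one control an API gateway typically enforces in front of cloud services, consider validating a per-client API key before a request reaches the backend. The client names and keys below are hypothetical, and a real gateway would add rate limiting, TLS, and token-based auth on top:

```python
import hmac

# Hypothetical key store: client id -> issued API key.
VALID_KEYS = {"client-a": "s3cr3t-key-a"}

def authorize(client_id, presented_key):
    """Return True only if the presented key matches the issued one."""
    expected = VALID_KEYS.get(client_id)
    if expected is None:
        return False
    # Constant-time comparison to avoid leaking key bytes via timing.
    return hmac.compare_digest(expected, presented_key)

allowed = authorize("client-a", "s3cr3t-key-a")
denied = authorize("client-a", "wrong-key")
```

The gateway would run a check like this for every request, so that the standalone services behind it never see unauthenticated traffic.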
- Research Article
1
- 10.3897/biss.5.75267
- Sep 16, 2021
- Biodiversity Information Science and Standards
Web APIs (Application Programming Interfaces) facilitate the exchange of resources (data) between two functionally independent entities across a common programmatic interface. In more general terms, Web APIs can connect almost anything to the World Wide Web. Unlike traditional software, APIs are not compiled, installed, or run. Instead, data are read (or consumed, in API speak) through a web-based transaction, where a client makes a request and a server responds. Within the scope of biodiversity informatics, Web APIs can be loosely grouped into two categories based on purpose. First, Product APIs deliver data products to end-users; examples include the Global Biodiversity Information Facility (GBIF) and iNaturalist APIs. Second, web-based Service APIs (hereafter Service APIs), the focus of this presentation, are designed and built to solve specific problems. Their primary function is to provide on-demand support to existing programmatic processes. Examples of this type include the Elasticsearch Suggester API and geolocation, a service that delivers geographic locations from spatial input (latitude and longitude coordinates) (Pejic et al. 2010). Many challenges lie ahead for biodiversity informatics and the sharing of global biodiversity data (e.g., Blair et al. 2020). Service-driven, standardized Service APIs that adhere to best practices within the scope of biodiversity informatics can provide the transformational change needed to address many of these issues. This presentation will highlight several critical areas of interest in the biodiversity data community, describing how Service APIs can address each individually. The main topics include: standardized vocabularies, interoperability of heterogeneous data sources, and data quality assessment and remediation.
Fundamentally, the value of any innovative technical solution can be measured by the extent of community adoption. In the context of Service APIs, adoption takes two primary forms: financial and temporal investment in the construction of clients that utilize Service APIs, and willingness of the community to integrate Service APIs into their own systems and workflows. To achieve this, Service APIs must be simple, easy to use, pragmatic, and designed with all major stakeholder groups in mind, including users, providers, aggregators, and architects (Anderson et al. 2020; this study). Unfortunately, many innovative and promising technical solutions have fallen short not because of an inability to solve problems (Verner et al. 2008); rather, they were difficult to use, built in isolation, and/or designed without effective communication with stakeholders. Fortunately, projects such as Darwin Core (Wieczorek et al. 2012), the Integrated Publishing Toolkit (Robertson et al. 2014), and Megadetector (Microsoft 2021) provide the blueprint for successful community adoption of a technological solution within the biodiversity community. The final section of this presentation will examine the often overlooked non-technical aspects of this technical endeavor, specifically how following these models can broaden community engagement and bridge the knowledge gap between the major stakeholders, resulting in the successful implementation of Service APIs.
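To make the idea of an on-demand Service API concrete, here is a toy sketch of the geolocation example mentioned above: a lookup that maps coordinates to a named region. The region names and bounding boxes are hypothetical stand-ins for a real gazetteer service:

```python
# Hypothetical gazetteer: region name -> (south, west, north, east)
# bounding box in decimal degrees. A real service would use polygons.
REGIONS = {"Madagascar": (-26.0, 43.0, -11.0, 51.0)}

def geolocate(lat, lon):
    """Return the name of the region containing (lat, lon), or None."""
    for name, (s, w, n, e) in REGIONS.items():
        if s <= lat <= n and w <= lon <= e:
            return name
    return None

place = geolocate(-18.9, 47.5)   # a point inside the Madagascar box
nowhere = geolocate(0.0, 0.0)    # a point outside every known region
```

A programmatic pipeline would call such a service per record, which is exactly the "on-demand support to existing programmatic processes" role described above.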
- Research Article
- 10.4233/uuid:3d7bc400-2447-4a88-8768-3025d7b54b7f
- Oct 10, 2019
The practice of software engineering involves combining existing software components with new functionality to create new software. This is where an Application Programming Interface (API) comes in: an API defines a set of functionality that a developer can reuse to incorporate certain capabilities into their codebase. Using an API can be challenging; for example, adopting a new API and using its functionality correctly both take effort. One of the biggest issues is that an API can evolve, with new features being added and existing features being modified or removed. Dealing with this challenge has led to an entire line of research on API evolution. In this thesis, we seek to understand to what extent API evolution, more specifically API deprecation, affects API consumers and how consumers deal with the changing API. API producers can impact consumer behavior by adopting specific deprecation policies; to uncover the nature of this relationship, we investigate how and why the API producer deprecates the API and how this impacts the consumer. Deprecation is a language feature, i.e., one that language designers implement. Its implementation can vary across languages, and thus the information conveyed by the deprecation mechanism can vary as well. The specific design decisions taken by language designers can have a direct impact on consumer behavior when it comes to dealing with deprecation. We investigate the language designer's perspective on deprecation and the impact of the design of a deprecation mechanism on the consumer. In this thesis, we investigate the relationship between API consumers, API producers, and language designers to understand how each has a role to play in reducing the burden of dealing with API evolution. Our findings show that, of the projects affected by deprecation of API elements, only a minority react to the deprecation of an API element.
Furthermore, of this minority, an even smaller proportion reacts by replacing the deprecated element with the recommended replacement. A larger proportion of projects prefer to roll back the version of the API that they use so that they are not affected by deprecation, while another faction is more willing to replace the API containing the deprecated element with another API entirely. API producers have a direct impact on this behavior: the API's deprecation policy directly influences the consumer's decision to react to deprecation. If the API producer is more likely to clean up their code, i.e., remove the deprecated element, then consumers are more likely to react to the deprecation of the element. This shows that even for non-web-based APIs, producers can impact consumer behavior. We also observe that the nature and content of the deprecation message can have an impact on consumer behavior. Consumers prefer to know when a deprecated feature is going to go away, what its replacement is, and the reason behind the deprecation (informing them of the urgency of reacting to it). The design of the deprecation mechanism needs to reflect these needs, as the deprecation mechanism is the only direct way in which API producers can communicate with the consumer.
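The three pieces of information consumers want (removal date, replacement, reason) can all be carried by a deprecation message. As an illustration, and not the thesis's own tooling, here is how that might look with Python's deprecation mechanism; the function names and version numbers are invented:

```python
import warnings

def circle_area(r):
    # The recommended replacement for the deprecated function below.
    return 3.141592653589793 * r * r

def old_area(r):
    # A well-formed deprecation message names the replacement, the
    # removal timeline, and the reason -- the three things consumers
    # say they need to judge how urgently to react.
    warnings.warn(
        "old_area() is deprecated and will be removed in v3.0; "
        "use circle_area() instead (renamed for API consistency).",
        DeprecationWarning,
        stacklevel=2,
    )
    return circle_area(r)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    area = old_area(2.0)  # still works, but records a DeprecationWarning
```

Because the deprecated function delegates to its replacement, consumers who have not yet reacted keep working code while the message tells them exactly what to do and by when.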
- Research Article
- 10.55124/jaim.v2i2.244
- Jan 1, 2024
- Journal of Artificial intelligence and Machine Learning
An Application Programming Interface (API) is a set of guidelines, principles, and utilities that lets distinct software applications communicate with one another. It is a service tailored for developers: it exposes the intrinsic operations of a library, website, service, or platform without requiring comprehension of their internals, establishing the approaches that can be employed and the structures of the data exchanged. Abstraction Layer: An API acts as an abstraction layer that separates the implementation details of a software component from its usage. It provides a standardized way for developers to access the functionality of another software component or service. Interoperability: APIs give otherwise independent systems a standardized way to work together. The significance of application programming interfaces (APIs) in research is profound and multifaceted, impacting various fields and disciplines. APIs serve as crucial tools that enable researchers to interact with and harness the capabilities of existing software, platforms, and services. Here is why APIs are significant in research. Efficiency and Productivity: APIs allow researchers to access complex functionalities without having to reinvent the wheel. By integrating APIs, researchers can save time and effort, focusing more on the core research objectives. Data Access and Analysis: APIs provide access to vast amounts of data from different sources. Researchers can gather, analyze, and synthesize data from diverse platforms, expanding the scope and depth of their research. Interdisciplinary Collaboration: APIs facilitate collaboration across disciplines. Researchers from different domains can utilize APIs to combine tools and data, fostering interdisciplinary studies that address complex problems. Innovation: APIs encourage innovation by enabling researchers to build upon existing technologies. By integrating APIs creatively, researchers can develop novel solutions that were not feasible before.
Replicability and Transparency: APIs enhance research transparency by allowing others to reproduce research procedures easily. The use of APIs ensures that methodologies and data processing steps can be replicated accurately. Customization: APIs enable researchers to customize tools and platforms according to their specific research needs. This flexibility empowers researchers to tailor solutions to their unique requirements. Automation: APIs facilitate the automation of repetitive tasks and data collection, reducing human errors and enhancing the reliability of research outcomes. Real-time Data: Many APIs provide real-time data updates. Researchers can access and analyze current information, making their studies more relevant and timely. Experimentation and Prototyping: APIs enable researchers to quickly prototype ideas and experiment with different functionalities. This rapid iteration helps in refining research methodologies. Cross-platform Integration: APIs bridge the gap between different software and platforms, enabling seamless integration between tools that might not natively work together. Cost-effectiveness: Instead of building tools from scratch, researchers can utilize APIs, which often offer cost-effective solutions for specific research needs. Education and Training: APIs play a role in educating the next generation of researchers and programmers. Learning to work with APIs exposes students to real-world coding practices and software development concepts. The ARAS (Additive Ratio Assessment) method addresses complex decision problems with a simple and appropriate indicator, the degree of utility, obtained by comparing each alternative against the best (optimal) solution; it reflects the differences between alternatives while its normalization step eliminates the influence of the units of measurement. The ARAS technique can therefore be applied here.
A typical MCDM (multi-criteria decision-making) problem concerns ranking a limited number of alternatives, each described by several clearly defined selection criteria; following the ARAS method, a utility function value is determined that expresses the relative effectiveness of each feasible alternative. The alternatives evaluated here are the Google Maps API, Twitter API, Stripe API, Spotify API, OpenWeatherMap API, and Twilio API, assessed against the criteria of documentation quality, ease of integration, functionality, performance, community and support, and security. The APIs are ranked using the Additive Ratio Assessment (ARAS) method: the Twilio API shows the highest rank value, whereas the Twitter API shows the lowest.
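The ARAS computation itself is short. The sketch below follows the standard published steps (add an optimal alternative, normalize, weight, sum, divide by the optimal score); the scores, weights, and API names are illustrative placeholders, not the study's actual data:

```python
def aras_rank(matrix, weights, benefit):
    """Rank alternatives with the Additive Ratio Assessment (ARAS) method.

    matrix  : rows = alternatives, columns = criteria (raw scores)
    weights : criterion weights summing to 1
    benefit : per criterion, True if higher is better
    Returns the utility degree K_i = S_i / S_0 for each alternative.
    """
    # Step 1: prepend the optimal alternative (best value per criterion).
    optimal = [max(col) if b else min(col)
               for col, b in zip(zip(*matrix), benefit)]
    full = [optimal] + [list(row) for row in matrix]

    # Step 2: invert cost criteria so "more is better" everywhere.
    for j, b in enumerate(benefit):
        if not b:
            for row in full:
                row[j] = 1.0 / row[j]

    # Step 3: normalize each column to sum to 1, then apply weights.
    sums = [sum(col) for col in zip(*full)]
    weighted = [[w * v / s for v, s, w in zip(row, sums, weights)]
                for row in full]

    # Step 4: optimality function S_i, then utility degree K_i.
    S = [sum(row) for row in weighted]
    return [s / S[0] for s in S[1:]]

# Hypothetical 1-10 scores for three APIs on three benefit criteria
# (documentation, ease of integration, performance).
scores = [[9, 8, 7],   # API A
          [6, 5, 8],   # API B
          [8, 9, 9]]   # API C
K = aras_rank(scores, weights=[0.4, 0.3, 0.3], benefit=[True, True, True])
best = max(range(len(K)), key=K.__getitem__)
```

The normalization in step 3 is what removes the influence of measurement units, and K_i directly expresses how close each alternative comes to the optimal one.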
- Research Article
- 10.55124/ijccsm.v1i1.236
- Jan 1, 2025
- International Journal of Cloud Computing and Supply Chain Management
An Application Programming Interface (API) is a set of guidelines, principles, and utilities that lets distinct software applications communicate with one another. It is a service tailored for developers: it exposes the intrinsic operations of a library, website, service, or platform without requiring comprehension of their internals, establishing the approaches that can be employed and the structures of the data exchanged. Abstraction Layer: An API acts as an abstraction layer that separates the implementation details of a software component from its usage. It provides a standardized way for developers to access the functionality of another software component or service. Interoperability: APIs give otherwise independent systems a standardized way to work together. The significance of application programming interfaces (APIs) in research is profound and multifaceted, impacting various fields and disciplines. APIs serve as crucial tools that enable researchers to interact with and harness the capabilities of existing software, platforms, and services. Here is why APIs are significant in research. Efficiency and Productivity: APIs allow researchers to access complex functionalities without having to reinvent the wheel. By integrating APIs, researchers can save time and effort, focusing more on the core research objectives. Data Access and Analysis: APIs provide access to vast amounts of data from different sources. Researchers can gather, analyze, and synthesize data from diverse platforms, expanding the scope and depth of their research. Interdisciplinary Collaboration: APIs facilitate collaboration across disciplines. Researchers from different domains can utilize APIs to combine tools and data, fostering interdisciplinary studies that address complex problems. Innovation: APIs encourage innovation by enabling researchers to build upon existing technologies. By integrating APIs creatively, researchers can develop novel solutions that were not feasible before.
Replicability and Transparency: APIs enhance research transparency by allowing others to reproduce research procedures easily. The use of APIs ensures that methodologies and data processing steps can be replicated accurately. Customization: APIs enable researchers to customize tools and platforms according to their specific research needs. This flexibility empowers researchers to tailor solutions to their unique requirements. Automation: APIs facilitate the automation of repetitive tasks and data collection, reducing human errors and enhancing the reliability of research outcomes. Real-time Data: Many APIs provide real-time data updates. Researchers can access and analyze current information, making their studies more relevant and timely. Experimentation and Prototyping: APIs enable researchers to quickly prototype ideas and experiment with different functionalities. This rapid iteration helps in refining research methodologies. Cross-platform Integration: APIs bridge the gap between different software and platforms, enabling seamless integration between tools that might not natively work together. Cost-effectiveness: Instead of building tools from scratch, researchers can utilize APIs, which often offer cost-effective solutions for specific research needs. Education and Training: APIs play a role in educating the next generation of researchers and programmers. Learning to work with APIs exposes students to real-world coding practices and software development concepts. The ARAS (Additive Ratio Assessment) method addresses complex decision problems with a simple and appropriate indicator, the degree of utility, obtained by comparing each alternative against the best (optimal) solution; it reflects the differences between alternatives while its normalization step eliminates the influence of the units of measurement. The ARAS technique can therefore be applied here.
A typical MCDM (multi-criteria decision-making) problem concerns ranking a limited number of alternatives, each described by several clearly defined selection criteria; following the ARAS method, a utility function value is determined that expresses the relative effectiveness of each feasible alternative. The alternatives evaluated here are the Google Maps API, Twitter API, Stripe API, Spotify API, OpenWeatherMap API, and Twilio API, assessed against the criteria of documentation quality, ease of integration, functionality, performance, community and support, and security. The APIs are ranked using the Additive Ratio Assessment (ARAS) method: the Twilio API shows the highest rank value, whereas the Twitter API shows the lowest.
- Book Chapter
3
- 10.1007/978-3-030-59592-0_7
- Jan 1, 2020
Web Application Programming Interfaces (APIs) allow third-party and subscribed users to access data and functions of a software application through the network or the Internet. Web APIs expose data and functions to public users, authorized users, or enterprise users. Web API providers publish API documentation to help users understand how to interact with web-based API services and how to use the APIs in their integration systems. The exponential rise in the number of public web service APIs can make it challenging for software engineers to choose an efficient API, and the challenge becomes more complicated when web APIs are updated regularly by their providers. In this paper, we introduce a novel transformation-based approach that crawls the web to collect web API documentation (unstructured documents). It generates a web API language model from the documentation, employs different machine learning algorithms to extract information, and produces a structured web API specification compliant with the OpenAPI Specification (OAS) format. The proposed approach improves information extraction patterns and learns the variety of structures and terminologies. In our experiment, we collected a large number of web API documentations. Our evaluation shows that the proposed approach finds RESTful API documentations with 75% accuracy, constructs API endpoints with 84% accuracy, constructs endpoint attributes with 95% accuracy, and assigns endpoints to attributes with 98% accuracy. The proposed approach was able to produce more than 2,311 OAS web API specifications.
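To illustrate the target format such an extraction pipeline produces, here is a sketch that assembles extracted endpoint/attribute pairs into a minimal OAS 3.0 document. The `/pets` endpoint and its `limit` parameter are hypothetical placeholders, not examples from the paper:

```python
import json

# Hypothetical output of the extraction step: endpoint -> method,
# summary, and (name, type) query parameters.
endpoints = {
    "/pets": {"method": "get", "summary": "List pets",
              "params": [("limit", "integer")]},
}

# Assemble the extracted pieces into an OpenAPI 3.0 skeleton.
spec = {
    "openapi": "3.0.0",
    "info": {"title": "Extracted API", "version": "1.0.0"},
    "paths": {},
}
for path, ep in endpoints.items():
    spec["paths"][path] = {
        ep["method"]: {
            "summary": ep["summary"],
            "parameters": [
                {"name": name, "in": "query", "schema": {"type": typ}}
                for name, typ in ep["params"]
            ],
            "responses": {"200": {"description": "OK"}},
        }
    }

as_json = json.dumps(spec, indent=2)  # serializable, machine-readable spec
```

Emitting a structured document like this is what makes the crawled, unstructured documentation consumable by downstream tooling.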
- Research Article
9
- 10.1108/lht-02-2022-0103
- Aug 18, 2022
- Library Hi Tech
Purpose: This study aims to identify developers' objectives, current state-of-the-art techniques, challenges, and performance evaluation metrics, and presents the outline of a knowledge-based application programming interface (API) recommendation system for developers. Moreover, the current study intends to classify the state-of-the-art techniques supporting automated API recommendation. Design/methodology/approach: The authors performed a systematic literature review of studies published between 2004 and 2021 to achieve the targeted research objective, and subsequently analyzed 35 primary studies. Findings: The outcomes of this study are: (1) devising a thematic taxonomy based on the identified developers' challenges, where mashup-oriented APIs and time-consuming processes are the challenges developers encounter most frequently; (2) categorizing current state-of-the-art API recommendation techniques (i.e., clustering, data preprocessing, similarity measurement, and ranking techniques); (3) designing a taxonomy based on the identified objectives, where accuracy is the most targeted objective in the API recommendation context; (4) identifying a list of evaluation metrics employed to assess the performance of the proposed techniques; (5) performing a SWOT analysis on the selected studies; (6) based on the developers' challenges, objectives, and SWOT analysis, presenting the outline of a recommendation system for developers; and (7) delineating several future research dimensions in the API recommendation context. Research limitations/implications: This study provides complete guidance to new researchers in the context of API recommendations. Researchers can also target these objectives (accuracy, response time, method recommendation, compatibility, user requirement-based APIs, automatic service recommendation, and API location) in the future.
Moreover, developers can overcome the identified challenges (including mashup-oriented APIs, time-consuming processes, learning how to use an API, integration problems, API method usage location, and limited usage of code) in the future by proposing a framework or recommendation system. Furthermore, the classification of current state-of-the-art API recommendation techniques also helps researchers who wish to work on API recommendation in the future. Practical implications: This study facilitates not only researchers but also practitioners in several ways. It guides developers in minimizing development time by selecting relevant APIs rather than following traditional manual selection, and it facilitates integrating APIs into a project. Thus, a recommendation system saves time for developers and increases their productivity. Originality/value: API recommendation remains an active area of research in web- and mobile-based application development. The authors believe that this study acts as a useful tool for interested researchers and practitioners, as it contributes to the body of knowledge in the API recommendation context.
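Of the technique families the review categorizes, similarity measurement is the easiest to illustrate. The sketch below shows cosine similarity over bag-of-words vectors of API descriptions, one common choice in such systems; the query and descriptions are invented for the example:

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two texts as bag-of-words vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = (sqrt(sum(c * c for c in va.values()))
            * sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# A developer's need, matched against hypothetical API descriptions.
query = "send sms text message"
apis = {"twilio": "send sms and voice message",
        "maps":   "render map tiles and geocode addresses"}
best = max(apis, key=lambda name: cosine(query, apis[name]))
```

Real recommenders layer preprocessing (stemming, TF-IDF weighting) and ranking on top, but the core matching step is this comparison of description vectors.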
- Research Article
- 10.3897/biss.5.75372
- Sep 20, 2021
- Biodiversity Information Science and Standards
RESTful APIs (REpresentational State Transfer Application Programming Interfaces) are the most commonly used mechanism for biodiversity informatics databases to provide open access to their content. In its simplest form, an API provides an interface based on the HTTP protocol whereby any client can perform an action on a data resource identified by a URL, using an HTTP verb (GET, POST, PUT, DELETE) to specify the intended action. For example, a GET request to a particular URL (informally called an endpoint) will return data to the client, typically in JSON format, which the client converts to the format it needs. A client can be either custom-written software or a commonly used program for data analysis such as R, Microsoft Excel (everybody's favorite data management tool), OpenRefine, or business intelligence software. APIs are therefore a valuable mechanism for making biodiversity data FAIR (findable, accessible, interoperable, reusable). There is currently no standard specifying how RESTful APIs should be designed, resulting in a variety of URL and response data formats across APIs. This presents a challenge for API users who are not technically proficient or familiar with programming if they have to work with many different and inconsistent data sources. We undertook a brief review of eight existing APIs that provide data about taxa to assess consistency and the extent to which the Darwin Core standard (Wieczorek et al. 2021) for data exchange is applied. We assessed each API based on aspects of URL construction and the format of the response data (Fig. 1). While only cursory and limited in scope, our survey suggests that consistency across APIs is low. For example, some APIs use nouns for their endpoints (e.g. ‘taxon’ or ‘species’), emphasising their content, whereas others use verbs (e.g. ‘search’), emphasising their functionality.
Response data seldom use Darwin Core terms (two out of eight examples), and a wide range of terms can be used to represent the same concept (e.g. six different terms are used for dwc:scientificNameAuthorship). Terms that can be considered metadata for a response, such as pagination details, also vary considerably. Interestingly, the public interfaces for the majority of APIs assessed do not provide POST, PUT or DELETE endpoints that modify the database. POST is used only to provide more detailed request bodies for retrieving data than is possible with GET. This indicates that biodiversity informatics platforms use APIs primarily for data sharing. An API design guideline is a document that provides a set of rules or recommendations for how APIs should be designed in order to improve their consistency and usability. API design guidelines are typically created by particular organizations to standardize API development within the organization, or as a guideline for programmers using an organization’s software to build APIs (e.g., Microsoft and Google). The API Stylebook is an online resource that provides access to a wide range of existing design guidelines, and there is an abundance of other resources available online. This presentation will cover some of the general concepts of API design, demonstrate some examples of how existing APIs vary, and discuss potential options to encourage standardization. We hope our analysis, the available body of knowledge on API design, and the collective experience of the biodiversity informatics community working with APIs may help answer the question “Does TDWG need an API design guideline?”
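The request/response cycle described above can be sketched in a few lines. The base URL, endpoint name, and response fields below are hypothetical (no real service is contacted); a canned JSON string stands in for the server's reply:

```python
import json
from urllib.parse import urlencode

def build_url(base, endpoint, **params):
    """Compose a GET URL for a noun-style endpoint, e.g. /taxon."""
    return f"{base}/{endpoint}?{urlencode(params)}"

# Hypothetical taxon-search endpoint and query parameters.
url = build_url("https://api.example.org/v1", "taxon",
                q="Puma concolor", limit=5)

# Canned JSON standing in for the server's response body.
response_body = ('{"results": [{"scientificName": "Puma concolor", '
                 '"rank": "species"}]}')

# The client parses the JSON and converts it to the format it needs.
records = json.loads(response_body)["results"]
names = [r["scientificName"] for r in records]
```

Note that even this toy example forces design decisions the survey found inconsistent across real APIs: the endpoint noun (`taxon` vs `species` vs `search`), the field names in the response, and how result lists and pagination are wrapped.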
- Research Article
- 10.3897/biss.5.75606
- Sep 23, 2021
- Biodiversity Information Science and Standards
The Global Biodiversity Information Facility (GBIF) runs a global data infrastructure that integrates data from more than 1700 institutions. Combining data at this scale has been achieved by deploying open Application Programming Interfaces (API) that adhere to the open data standards provided by Biodiversity Information Standards (TDWG). In this presentation, we will provide an overview of the GBIF infrastructure and APIs and provide insight into lessons learned while operating and evolving the systems, such as long-term API stability, ease of use, and efficiency. This will include the following topics: The registry component provides RESTful APIs for managing the organizations, repositories and datasets that comprise the network and control access permissions. Stability and ease of use have been critical to this being embedded in many systems. Changes within the registry trigger data crawling processes, which connect to external systems through their APIs and deposit datasets into GBIF's central data warehouse. One challenge here relates to the consistency of data across a distributed network. Once a dataset is crawled, the data processing infrastructure organizes and enriches data using reference catalogues accessed through open APIs, such as the vocabulary server and the taxonomic backbone. Being able to process data quickly as source data and reference catalogues change is a challenge for this component. The data access APIs provide search and download services. Asynchronous APIs are required for some of these aspects, and long-term stability is a requirement for widespread adoption. Here we will talk about policies for schema evolution to avoid incompatible changes, which would cause failures in client systems. The APIs that drive the user interface have specific needs such as efficient use of the network bandwidth. We will present how we approached this, and how we are currently adopting GraphQL as the next generation of these APIs. 
There are several APIs that we believe are of use for the data publishing community. These include APIs that will help in data quality aspects, and new data of interest thanks to the data clustering algorithms GBIF deploys.
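One concrete schema-evolution policy that keeps clients working as APIs evolve is the "tolerant reader": the client ignores unknown fields and supplies defaults for optional ones, so additive server-side changes never break it. A minimal sketch, with hypothetical field names rather than GBIF's actual response schema:

```python
# Fields the client knows about, with defaults for optional ones.
EXPECTED = {"key": None, "title": "", "doi": None}

def read_dataset(payload):
    """Read only expected fields; silently ignore anything new."""
    return {field: payload.get(field, default)
            for field, default in EXPECTED.items()}

# An older server response, missing the optional "doi" field.
old = read_dataset({"key": 42, "title": "Birds of X"})

# A newer response that added "doi" and an unknown "license" field.
new = read_dataset({"key": 42, "title": "Birds of X",
                    "doi": "10.15468/example", "license": "CC0"})
```

On the provider side, this is why additive changes (new optional fields) are considered compatible while removals and renames are not: a tolerant reader survives the former but not the latter.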
- Research Article
27
- 10.1093/bioinformatics/btac017
- Jan 10, 2022
- Bioinformatics
Summary: To meet the increased need to make biomedical resources more accessible and reusable, Web Application Programming Interfaces (APIs), or web services, have become a common way to disseminate knowledge sources. The BioThings APIs are a collection of high-performance, scalable, annotation-as-a-service APIs that automate the integration of biological annotations from disparate data sources. This collection currently includes MyGene.info, MyVariant.info and MyChem.info for integrating annotations on genes, variants and chemical compounds, respectively. These APIs are used by both individual researchers and application developers to simplify the process of annotation retrieval and identifier mapping. Here, we describe the BioThings Software Development Kit (SDK), a generalizable and reusable toolkit for integrating data from multiple disparate data sources and creating high-performance APIs. This toolkit allows users to easily create their own BioThings APIs for any data type of interest to them, as well as keep APIs up to date with their underlying data sources. Availability and implementation: The BioThings SDK is built in Python and released via PyPI (https://pypi.org/project/biothings/). Its source code is hosted at its GitHub repository (https://github.com/biothings/biothings.api). Supplementary information: Supplementary data are available at Bioinformatics online.
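As a sketch of how a client might talk to one of these services, the snippet below builds a query URL for MyGene.info's public query endpoint. The `/v3/query` path and `q`/`fields`/`size` parameters follow the service's documented interface, but no request is actually sent here and the example query is illustrative:

```python
from urllib.parse import urlencode

def mygene_query_url(q, fields="symbol,name,taxid", size=10):
    """Build (but do not send) a MyGene.info /v3/query request URL."""
    params = urlencode({"q": q, "fields": fields, "size": size})
    return f"https://mygene.info/v3/query?{params}"

# Example: look up the human CDK2 gene by symbol.
url = mygene_query_url("symbol:CDK2 AND species:human")
```

In practice one would send this with any HTTP client and read the JSON `hits` array from the response; the same request shape applies to MyVariant.info and MyChem.info, which is the point of the shared SDK.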
- Research Article
1
- 10.1007/s10270-015-0473-1
- May 19, 2015
- Software & Systems Modeling
Modern software increasingly relies on third-party libraries that are accessed via application programming interfaces (APIs). Libraries usually impose constraints on how API functions can be used (API usage rules), and programmers have to obey these rules. However, API usage rules are often not well documented, or documented only informally. In this work, we show how to use the SCTPL and SLTPL logics to precisely and formally specify API usage rules in libraries, where SCTPL/SLTPL can be seen as extensions of the branching/linear temporal logics CTL/LTL with variables, quantifiers, and predicates over the stack. This allows library providers to formally describe API usage rules without knowing how their libraries will be used by programmers. We propose an automated approach to check whether programs using libraries violate API usage rules. Our approach consists of modeling programs as pushdown systems (PDSs) and checking API usage rules by SCTPL/SLTPL model checking for PDSs. To make the model-checking procedure more efficient and precise, we propose an abstraction that drastically reduces the size of the program model, and we integrate may-alias analysis into our approach to reduce false alarms. Moreover, we characterize two sublogics, rSCTPL and rSLTPL, of SCTPL and SLTPL that are preserved by the abstraction. We implemented our techniques in a tool and applied it to check several open-source programs, finding several previously unknown bugs. The may-alias analysis avoids most of the false alarms that occur when using SCTPL or SLTPL model-checking techniques without it.
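To give a feel for what an "API usage rule" is, here is a drastically simplified checker for one classic rule: every `acquire()` must eventually be matched by a `release()` on the same handle. The real approach model-checks pushdown systems against SCTPL/SLTPL formulas; this linear scan over a call trace only conveys the intuition, and the call names are invented:

```python
def unmatched_acquires(trace):
    """Return handles that were acquired but never released.

    trace: sequence of (call_name, handle) pairs observed in a program.
    """
    open_handles = set()
    for call, handle in trace:
        if call == "acquire":
            open_handles.add(handle)
        elif call == "release":
            open_handles.discard(handle)
    return sorted(open_handles)  # each one is a rule violation

ok = unmatched_acquires([("acquire", "h1"), ("release", "h1")])
bad = unmatched_acquires([("acquire", "h1"), ("acquire", "h2"),
                          ("release", "h1")])
```

What the paper's machinery adds over this toy is precisely what makes the problem hard: branching control flow, recursion (hence the pushdown stack), and aliasing, where two variable names may denote the same handle.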
- Conference Article
8
- 10.1145/3184558.3186966
- Jan 1, 2018
An Application Programming Interface (API) exposes data and functions of a software application to third-party users. In digital business, the API economy is one of the key components determining the value of provided services. With the rise in the number of publicly available APIs, understanding each API endpoint manually is not only labor-intensive but also error-prone for software engineers. Due to the complexity of understanding the sheer number of APIs, it is difficult for software developers to find the best possible API combinations (i.e., API mashups). In this demonstration, we introduce the API Learning platform, which employs machine-learning-based technologies to efficiently search APIs, validate APIs, and generate API mashups. These technologies enable a machine to automatically generate machine-readable API specifications from API documentation, understand a variety of APIs, validate the extracted information through automated API validation, and finally recommend API mashups for a specific purpose. As of now, the API Learning platform has collected over 14,000 API documentation pages and generates a machine-readable format for REST APIs with an accuracy of 84%. The proposed demo prototype shows how it enables users to quickly find relevant APIs, automatically verify API availability, and get the best possible API mashup recommendations.
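The search step over machine-readable API specifications can be sketched as keyword matching against extracted descriptions. The specs and the term-overlap scoring below are illustrative stand-ins, not the platform's actual machine-learning model.

```python
# Minimal keyword search over machine-readable API descriptions.
# The specs and the scoring are illustrative, not the API Learning
# platform's actual model.
SPECS = [
    {"name": "GeoLookup", "desc": "resolve latitude longitude to place names"},
    {"name": "WeatherNow", "desc": "current weather by latitude longitude"},
    {"name": "StockQuote", "desc": "real time stock price quotes"},
]

def search_apis(query, specs=SPECS):
    """Rank API specs by how many query terms their description contains."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(s["desc"].split())), s["name"]) for s in specs]
    return [name for score, name in sorted(scored, reverse=True) if score > 0]

print(search_apis("latitude longitude"))  # geolocation-related APIs rank first
print(search_apis("stock price"))
```

A mashup recommender would extend this by checking which of the matched APIs can be chained, i.e., whether one API's output satisfies another's input.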
- Conference Article
36
- 10.1145/3361149.3361164
- Jul 3, 2019
Remote Application Programming Interfaces (APIs) are technology enablers for distributed systems and cloud-native application development. API providers find it hard to design their remote APIs so that they can be evolved easily; refactoring and extending an API while preserving backward compatibility is particularly challenging. If APIs are evolved poorly, clients are critically impacted; high costs to adapt and compensate for downtimes may result. For instance, if an API provider publishes a new, incompatible API version, existing clients might break and not function properly until they are upgraded to support the new version. Hence, applying adequate strategies for evolving service APIs is one of the core problems in API governance, which in turn is a prerequisite for successfully integrating service providers with their clients in the long run. Although many patterns and pattern languages are concerned with API and service design and related integration technologies, patterns guiding the evolution of APIs are missing to date. Extending our emerging pattern language on Microservice API Patterns (MAP), we introduce a set of patterns focusing on API evolution strategies in this paper: API Description, Version Identifier, Semantic Versioning, Eternal Lifetime Guarantee, Limited Lifetime Guarantee, Two in Production, Aggressive Obsolescence, and Experimental Preview. The patterns were mined from public Web APIs and from industry projects in which the authors have been involved.
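Two of the listed patterns can be combined in a short sketch: a Version Identifier carried in the request path, with Two in Production keeping exactly two major versions live so clients get time to migrate. The routing table, paths, and status codes below are illustrative assumptions, not taken from the paper.

```python
# Sketch of Version Identifier + Two in Production: route requests by the
# major version in the URL and keep exactly two major versions live.
LIVE_VERSIONS = {"v2", "v3"}   # old and new major versions, per Two in Production

def route(path):
    """Dispatch /vN/... requests, rejecting retired or unknown versions."""
    version = path.lstrip("/").split("/", 1)[0]
    if version not in LIVE_VERSIONS:
        # 410 Gone signals a retired version (Aggressive Obsolescence).
        return 410, f"{version} retired; supported: {sorted(LIVE_VERSIONS)}"
    return 200, f"handled by {version} handler"

print(route("/v3/orders/42"))  # (200, 'handled by v3 handler')
print(route("/v1/orders/42"))  # 410: v1 has been obsolesced
```

When v4 ships, the provider would retire v2 and set `LIVE_VERSIONS = {"v3", "v4"}`, preserving the two-versions invariant.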
- Research Article
1
- 10.1108/dprg-10-2020-0147
- Oct 20, 2021
- Digital Policy, Regulation and Governance
Purpose Some specialized consultancies have been making the case for an "API economy". This study aims to investigate the issue, marshalling data on the economic dimension, to better understand the environments of APIs. It offers an overview of the functions and definition of application programming interfaces (APIs) against the backdrop of the history of services computing. The paper attempts to assess the economic value (size of the market) of APIs and reviews some of the available metrics. It also looks at some issues and challenges ahead for the deployment of all kinds of APIs. Design/methodology/approach The paper is based on desk research and a review of the scientific and grey literature. However, it relies mostly on specialized consultancies, albeit from a critical viewpoint. The paper provides a historical account of the notions of APIs and the API economy. Findings The paper questions the idea of an "API economy", which still stands on the "hype" side and is not clearly substantiated. It reveals that the number of firms with mature API programs remains small and that there is an uneven development across industries (traditional firms are less active than digital natives) and countries (Silicon Valley is leading). It highlights that the domination of IT companies (leaders and pioneers of APIs) raises issues of competition and, at some point, may prevent rather than foster innovation. Research limitations/implications There are no robust data about the size of the API market or about its value. Sources are highly heterogeneous and their delimitations are not always precise. Standard metrics or indicators are hard to find. Further research would be needed to better document this area. Practical implications The paper reviews some of the expected benefits of the use of APIs as enablers of private or public ecosystems. Social implications The paper delineates some of the economic benefits of public APIs based on open data.
It shows some positive examples of public APIs in the EU. Originality/value There is hardly any mention of the API economy in the research literature. Most of the academic literature still stems from engineering or business-management departments, not from economics departments. Consultants usually focus on the potential for business growth and on how to design an effective API strategy, but not on the economic dimension itself. The paper attempts to provide a synthesis of the available data.
- Research Article
16
- 10.1147/jrd.2016.2518818
- Mar 1, 2016
- IBM Journal of Research and Development
Cloud-enabled applications and services increasingly consume other services through web application programming interfaces (APIs). API ecosystems support both the production and the consumption of APIs. For service providers seeking to externalize their APIs, API ecosystems help publish, promote, and provision such APIs. For applications or services consuming APIs, API ecosystems unify how APIs are presented and composed. A key challenge for API ecosystems is the continuous collection of information on APIs and the utilization of the information for the benefit of all actors in the ecosystem. In this work, we present the design of API Harmony, a service to support developers in identifying, selecting, and consuming APIs. API Harmony builds on our previous work on building an API Graph, which enables the continuous collection of API information and analysis operations for API providers, consumers, and ecosystem providers. In this paper, we revise the API Graph and describe how we utilize its latest version in API Harmony for API search and selection. Furthermore, we describe how we implemented API Harmony and present an evaluation of its capabilities compared with existing solutions.
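The API Graph idea of linking APIs to the information they consume and produce can be sketched as a small typed graph supporting two of the selection queries the abstract mentions: finding APIs that deliver a given data type, and checking whether two APIs can be composed. The graph contents and function names below are our illustrative assumptions, not the actual API Harmony data model.

```python
# Sketch of an "API graph": each API maps to (input types, output types).
# Contents are illustrative, not API Harmony's actual data.
API_GRAPH = {
    "GeocodeAPI": ({"address"}, {"coordinates"}),
    "WeatherAPI": ({"coordinates"}, {"forecast"}),
    "TranslateAPI": ({"text"}, {"text"}),
}

def apis_producing(data_type):
    """Find APIs whose outputs include the requested data type."""
    return sorted(name for name, (_, outs) in API_GRAPH.items()
                  if data_type in outs)

def composable(first, second):
    """True if first's outputs satisfy at least one of second's inputs."""
    return bool(API_GRAPH[first][1] & API_GRAPH[second][0])

print(apis_producing("forecast"))              # ['WeatherAPI']
print(composable("GeocodeAPI", "WeatherAPI"))  # True: address -> coordinates -> forecast
```

Continuous collection, as described in the paper, would keep such a graph current as providers publish, change, and retire APIs.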
- Conference Article
4
- 10.1109/scc55611.2022.00017
- Jul 1, 2022
Application Programming Interfaces (APIs) define how Web services, middleware, frameworks, and libraries communicate with their clients. An API that conforms to REpresentational State Transfer (REST) design principles is known as a REST API. At present, REST is the industry standard for interaction among Web services. There are three main categories of APIs: public, partner, and private. Public APIs are designed for external consumers, whereas partner APIs are aimed at organizational partners. In contrast, private APIs are designed solely for internal use. API quality matters regardless of category and intended consumers. To assess the (linguistic) design of APIs, researchers have defined linguistic patterns (i.e., best API design practices) and linguistic antipatterns (i.e., poor API design practices). APIs that follow linguistic patterns are easy to understand, use, and maintain. In this study, we analyze and compare the design quality of public, partner, and private APIs. More specifically, we conducted a large survey, detecting nine linguistic patterns and their corresponding antipatterns on more than 2,500 endpoints from 37 APIs. Our results suggest that (1) public, partner, and private APIs all lack quality linguistic design, (2) among the three API categories, private APIs lack linguistic design the most, and (3) endpoints are amorphous, contextless, and non-descriptive in partner APIs. Endpoints have contextless designs and poor documentation regardless of API category.
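A simplified detector for two such linguistic antipatterns is sketched below: a "CRUDy URI" (an HTTP-style verb baked into the path) and a non-descriptive, amorphous path segment. The word lists and rule names are illustrative simplifications of the nine patterns the study actually detects.

```python
import re

# Toy linter for two REST linguistic antipatterns on endpoint paths.
HTTP_VERBS = {"get", "create", "update", "delete", "insert"}
AMORPHOUS = {"data", "info", "stuff", "items", "thing"}

def lint_endpoint(path):
    """Return the antipatterns found in one endpoint path."""
    findings = []
    for segment in re.split(r"[/_-]", path.lower()):
        if segment in HTTP_VERBS:
            findings.append(f"CRUDy URI: verb '{segment}' in path")
        if segment in AMORPHOUS:
            findings.append(f"non-descriptive segment: '{segment}'")
    return findings

print(lint_endpoint("/get-users/data"))      # flags both antipatterns
print(lint_endpoint("/customers/42/orders")) # clean: nouns only
```

A pattern-conformant alternative to `/get-users/data` would be a plain noun resource such as `/users`, with the operation expressed by the HTTP method rather than the URI.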