oidc-agent: Integrating OpenID Connect Tokens with the Command Line
oidc-agent is an OpenID Connect tool suite designed to simplify authentication for command-line applications and workflows that need access to resources protected by OpenID Connect. It provides a secure yet user-friendly way to manage tokens on the command line, reducing the need for manual re-authentication. This paper presents an in-depth overview of the architecture and features of the tool suite, alongside its real-world applications. oidc-agent is a valuable tool in token-based authentication workflows, particularly in cloud computing, high-performance computing, and scientific research, where efficient and secure access to resources is critical.
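To make the workflow concrete, the following sketch obtains an access token from a running oidc-agent through the oidc-token command and presents it as a Bearer token to a protected HTTP API. It is a minimal example, assuming an agent is already running with a loaded account; the account short name ("myaccount") and the API URL are placeholders.

```python
import json
import subprocess
import urllib.request

# Ask the running oidc-agent for a valid access token for the account
# registered under the short name "myaccount" (placeholder); the agent
# refreshes the token transparently if the cached one has expired.
access_token = subprocess.run(
    ["oidc-token", "myaccount"],
    check=True, capture_output=True, text=True,
).stdout.strip()

# Present the token as an RFC 6750 Bearer token to a protected resource
# (placeholder URL).
request = urllib.request.Request(
    "https://api.example.org/protected/resource",
    headers={"Authorization": f"Bearer {access_token}"},
)
with urllib.request.urlopen(request) as response:
    print(json.load(response))
```

Because the agent handles token refresh in the background, the same call also works unattended, for example from a cron-driven monitoring job.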
Highlights
OpenID Connect (OIDC) is a key technology in token-based authentication and authorisation infrastructures
Non-interactive operation must be supported, e.g. for regularly called APIs such as monitoring services that require OIDC authentication
Application support is required, since many applications struggle to implement appropriate OIDC integration on the client side (see the sketch below)
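The usual way to provide such application support is to let the application delegate all OIDC protocol handling to the agent. A minimal sketch, assuming oidc-agent's Python client library is installed as liboidcagent and exposes a get_access_token helper; the exact module and function names, the account short name, and the validity period are assumptions to be checked against the project documentation.

```python
# Assumed import: oidc-agent's Python client library (liboidcagent).
import liboidcagent as agent

# Request a token for the account "myaccount" (placeholder) that remains
# valid for at least 60 seconds, so a long-running request does not
# outlive it. The agent, not the application, talks to the OIDC provider.
token = agent.get_access_token("myaccount", 60)

# The application only ever sees an opaque Bearer credential.
print(token[:16], "...")
```

In this model the application never stores client credentials or refresh tokens itself; it only asks the locally running agent for short-lived access tokens.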
Summary
OpenID Connect (OIDC) is a key technology in token-based authentication and authorisation infrastructures. The only flows that can run non-interactively are the “resource owner credentials” and “client credentials” flows, as well as the “refresh flow”. For the former there are security concerns; the latter requires a previous authentication flow. The main goal of oidc-agent [3–5] is to enable usage of OIDC tokens on the command line to securely support “non-web” use-cases. Several use-cases implement delegation scenarios, where access tokens (ATs) are used from remote services with and without a direct network connection back to the initiating host. From these scenarios we derived general requirements that guided the design of oidc-agent.
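For context, the refresh flow mentioned above amounts to a single form-encoded POST to the provider's token endpoint (the standard OAuth 2.0 refresh_token grant); oidc-agent performs this step on the user's behalf whenever a fresh access token is needed. A minimal sketch with placeholder endpoint, client credentials, and refresh token:

```python
import json
import urllib.parse
import urllib.request

# Placeholder values; a real deployment would take the endpoint from the
# provider's metadata and a refresh token from a previous interactive flow.
TOKEN_ENDPOINT = "https://issuer.example.org/token"
CLIENT_ID = "my-client"
CLIENT_SECRET = "my-secret"
REFRESH_TOKEN = "stored-refresh-token"

# OAuth 2.0 refresh_token grant: trade the long-lived refresh token for a
# fresh, short-lived access token without any user interaction.
data = urllib.parse.urlencode({
    "grant_type": "refresh_token",
    "refresh_token": REFRESH_TOKEN,
    "client_id": CLIENT_ID,
    "client_secret": CLIENT_SECRET,
}).encode()

with urllib.request.urlopen(urllib.request.Request(TOKEN_ENDPOINT, data=data)) as resp:
    tokens = json.load(resp)

print(tokens["access_token"], tokens.get("expires_in"))
```

Performing this exchange inside the agent, which keeps the refresh token protected, is what removes the need for repeated interactive logins.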
References
- 10.17487/rfc8628, Aug 1, 2019 (RFC 8628: OAuth 2.0 Device Authorization Grant)
- 10.1051/epjconf/202429501037, Jan 1, 2024, EPJ Web of Conferences
- 10.1051/epjconf/202429504054, Jan 1, 2024, EPJ Web of Conferences
- 10.5445/ir/1000134712, Jan 1, 2021
- 10.1109/ares.2008.53, Mar 1, 2008
- 10.17487/rfc8693, Jan 1, 2020 (RFC 8693: OAuth 2.0 Token Exchange)
- 10.17487/rfc7591, Jul 1, 2015 (RFC 7591: OAuth 2.0 Dynamic Client Registration Protocol)
- 10.1007/3-540-44681-8_116, Jan 1, 2001
- 10.17487/rfc7636, Sep 1, 2015 (RFC 7636: Proof Key for Code Exchange by OAuth Public Clients)
- 10.17487/rfc6750, Oct 1, 2012 (RFC 6750: The OAuth 2.0 Authorization Framework: Bearer Token Usage)