oidc-agent - Integrating OpenID Connect Tokens with the Command Line

Abstract

oidc-agent is an OpenID Connect tool suite designed to simplify authentication for command-line applications and workflows that require access to resources protected by OpenID Connect. It provides a secure yet user-friendly way to manage tokens on the command line, reducing the need for manual re-authentication. This paper presents an in-depth overview of the architecture and features of the tool suite, alongside its real-world applications. oidc-agent serves as a valuable tool in token-based authentication workflows, particularly for applications in cloud computing, high-performance computing, and scientific research, where efficient and secure access to resources is critical.

Highlights

  • OpenID Connect (OIDC) is a key technology in token-based authentication and authorisation infrastructures

  • Non-interactive operation must be supported, e.g. for regularly called APIs such as monitoring services that require OIDC authentication

  • Application support is required, since many applications struggle to implement appropriate OIDC integration on the client side
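For example, a non-interactive monitoring script can fetch a fresh access token from a running agent and attach it as a bearer token. The sketch below assumes the standard `oidc-token` client shipped with oidc-agent; the account short name `myaccount` and the idea of calling it from a monitoring job are illustrative, not taken from the paper:

```python
import shutil
import subprocess

def bearer_header(access_token):
    """Build the HTTP Authorization header carrying an OIDC access token."""
    return {"Authorization": f"Bearer {access_token}"}

def token_from_agent(account):
    """Ask a running oidc-agent for a valid access token via the
    `oidc-token` client; `account` is the configured account short name."""
    result = subprocess.run(
        ["oidc-token", account],
        check=True, capture_output=True, text=True,
    )
    return result.stdout.strip()

# Only contact the agent if the client is installed; "myaccount" is a
# hypothetical account short name used for illustration.
if shutil.which("oidc-token"):
    headers = bearer_header(token_from_agent("myaccount"))
    # e.g. pass `headers` to an HTTP client when polling a protected API
```

Because the agent transparently refreshes expired tokens, such a script never has to re-authenticate interactively.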


Summary

Introduction

OpenID Connect (OIDC) is a key technology in token-based authentication and authorisation infrastructures. The only flows that can be non-interactive are the “resource owner credentials” and “client credentials” flows, as well as the “refresh flow”. For the former two there are security concerns; the latter requires a previous authentication flow. The main goal of oidc-agent [3–5] is to enable the use of OIDC tokens on the command line and to securely support “non-web” use cases. Several use cases implement delegation scenarios, in which access tokens (ATs) are used from remote services with and without a direct network connection back to the initiating host. From these scenarios we derived general requirements that helped design oidc-agent.
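The refresh flow mentioned above is what oidc-agent automates under the hood: a long-lived refresh token is exchanged for a short-lived access token via a form POST to the provider's token endpoint. A minimal sketch of building that request, with placeholder endpoint and credentials (not taken from the paper; the real tool also handles encrypted storage and caching):

```python
# Sketch of the OAuth2/OIDC refresh-token grant. All endpoint and
# credential values below are illustrative placeholders.

def build_refresh_request(token_endpoint, client_id, client_secret, refresh_token):
    """Return (url, form_body) for a refresh-grant token request."""
    body = {
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": client_id,
        "client_secret": client_secret,
    }
    return token_endpoint, body

# Hypothetical issuer and client registration:
url, body = build_refresh_request(
    "https://issuer.example.org/token", "my-client", "s3cr3t", "rt-abc123")
# POSTing `body` as a form to `url` would yield a JSON response
# containing a fresh access_token.
```

Because only the initial registration and authentication are interactive, this grant is what makes long-running, unattended token use possible.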

Related Work
Similar Papers
  • Conference Article
  • Cited by 5
  • 10.1109/iciota.2017.8073634
Optimization of performance and scheduling of HPC applications in cloud using cloudsim and scheduling approach
  • May 1, 2017
  • D Boobala Muralitharan + 2 more

Cloud computing is emerging as a promising alternative to supercomputers for some High-Performance Computing (HPC) applications. Cloud computing is an essential component of the backbone of the Internet of Things (IoT). Clouds are needed to support huge numbers of interactions with varying quality requirements; hence, service quality will be a vital differentiator among cloud providers. To differentiate themselves from their competitors, cloud providers should offer the best services that meet customers' expectations. A quality model can be used to represent, measure, and compare the quality of providers, such that a mutual understanding can be established among cloud stakeholders. With cloud as an additional deployment option, HPC users and providers face the challenges of dealing with highly heterogeneous resources, where the variability spans a wide range of processor configurations, interconnects, virtualization environments, and pricing models. HPC applications are increasingly being used in academia and laboratories for scientific research and in industry for business and analytics. Cloud computing offers HPC users the benefits of virtualization, elasticity of resources, and elimination of cluster setup cost and time. A holistic viewpoint was taken to answer the questions: why and who should choose cloud for HPC, for what applications, and how can the cloud be used for HPC? A comprehensive performance and cost evaluation and analysis of running a set of HPC applications on a range of platforms, varying from supercomputers to clouds, was carried out. Further, the performance of HPC applications in the cloud is improved by optimizing application characteristics for the cloud and cloud virtualization mechanisms for HPC. In this paper, a novel heuristic for online application-aware job scheduling in multi-platform environments is presented. Experimental results and simulations using CloudSim show that current clouds cannot substitute supercomputers but can effectively complement them.

  • Conference Article
  • Cited by 2
  • 10.1109/iceice.2017.8191887
Enabling high performance computing in cloud computing environments
  • Apr 1, 2017
  • M Kumaresan + 1 more

Cloud Computing is a server-based model which provides a shared pool of resources for clients to access from a remote location. It provides various advantages to users, such as the pay-as-you-go model, elasticity, flexibility, and the dynamic customization offered by virtualization. Present high-speed networks and low-cost devices have led to the growth of cloud computing. Currently, High Performance Computing (HPC) applications are run in computing clusters that are set up on the users' own premises. This requires ownership, a high initial set-up cost, and recurring maintenance costs, which are an unwanted burden for HPC application users. So, it would be advantageous to use a cloud service for HPC applications, which would result in huge savings. However, due to the interconnect bandwidth and heterogeneity involved in the cloud service, HPC applications deliver poor performance in the cloud. We evaluate the current performance of HPC applications in existing cloud infrastructures and then discuss various techniques to mitigate interference, virtualization overhead, and problems due to shared resources in the cloud. In the end, we conclude with future work that can be done to ensure that HPC applications are more suitable for the cloud.

  • Research Article
  • Cited by 21
  • 10.1016/j.ieri.2014.09.072
Improving HPC Application Performance in Public Cloud
  • Jan 1, 2014
  • IERI Procedia
  • Rashid Hassani + 2 more


  • Research Article
  • 10.9734/bjmcs/2016/27872
Identifying Most Relevant Performance Measures for Root Cause Analysis of Performance Degradation Events on a Private Cloud Computing Application: Experiment in an Industry Environment
  • Jan 10, 2016
  • British Journal of Mathematics & Computer Science
  • A Ravanello + 5 more

Cloud computing applications (CCA) are defined by their elasticity, on-demand provisioning, and ability to address volatile workloads cost-effectively. These cloud computing (CC) applications are being increasingly deployed by organizations, but without a means of managing their performance proactively. While CCA provide advantages and disadvantages over traditional client-server applications, their unreliable application performance, due to the intricacy and large number of interconnected moving parts of the underlying infrastructure, has become a major challenge for software engineers and system administrators. For example, capturing how end-users perceive application performance as they complete their daily tasks has not been addressed satisfactorily. One possible approach for identifying the most relevant performance measures for Root Cause Analysis (RCA) of performance degradation events on CCA, from an end-user perspective, is to leverage the information captured in performance logs, a source of data that is widely available in today’s datacenters, where detailed records of resource consumption and performance are captured from the numerous systems, servers, and network components used by the CCA. This paper builds on a model proposed for measuring CC application performance and extends it with the end-user perspective, exploring how it can be used to identify root causes (RC) of performance degradation events in a large-scale industrial scenario. The experimentation required adjustments to the original proposal in order to determine, with the help of a multivariate statistical technique, the performance of a CCA from the perspective of an end-user. An experiment with a corporate email CCA is also presented, illustrating how the performance model can identify the most relevant performance measures and help predict future performance issues.

  • Research Article
  • Cited by 2
  • 10.5604/01.3001.0010.0158
Competitiveness of Polish enterprises in relation to the potential of cloud computing
  • Mar 29, 2017
  • Kwartalnik Nauk o Przedsiębiorstwie
  • Katarzyna Nowicka

Enterprises seek possibilities to limit costs and areas that stimulate innovation. Both of these aspects can be effectively supported by the application of cloud computing, without a simultaneous need to make a trade-off. The aim of the article is to prove that cloud computing provides entrepreneurs with the possibility to limit costs while at the same time supporting activities related to the selected direction of innovation development. It has both a direct influence on the level and structure of costs in the enterprise and an indirect influence, e.g. related to shortening the time to introduce new solutions to the market, make decisions, or limit the costs of projects.

  • Research Article
  • Cited by 7
  • 10.1002/cpe.4090
Cloud computing and big data: Technologies and applications
  • Mar 29, 2017
  • Concurrency and Computation: Practice and Experience
  • Mostapha Zbakh + 2 more


  • Research Article
  • 10.1109/tmscs.2018.2871444
HPC Process and Optimal Network Device Affinitization
  • Oct 1, 2018
  • IEEE Transactions on Multi-Scale Computing Systems
  • Ravindra Babu Ganapathi + 2 more

High Performance Computing (HPC) applications have demanding needs for hardware resources such as processor, memory, and storage. Applications in the area of Artificial Intelligence and Machine Learning are taking center stage in HPC, driving demand for increasing compute resources per node, which in turn pushes the bandwidth requirement between compute nodes. New system design paradigms exist where deploying a system with more than one high-performance IO device per node provides benefits. The number of IO devices connected to an HPC node can be increased with PCIe switches, and hence some HPC nodes are designed to include PCIe switches to provide a large number of PCIe slots. With multiple IO devices per node, application programmers are forced to consider HPC process affinity not only to compute resources but also to IO devices. Mapping of processes to processor cores and the closest IO device(s) increases complexity due to the three-way mapping and varying HPC node architectures. While operating systems perform reasonable mapping of processes to processor cores, they lack the application developer's knowledge of process workflow and optimal IO resource allocation when more than one IO device is attached to the compute node. This paper is an extended version of our work published in [1]. Our previous work provided a solution for IO device affinity choices by abstracting the device selection algorithm from HPC applications. In this paper, we extend the affinity solution to enable OpenFabrics Interfaces (OFI), a generic HPC API designed as part of the OpenFabrics Alliance that enables wider HPC programming models and applications supported by various HPC fabric vendors, including models such as SHMEM, GASNet, and UPC. MPI continues to be the dominant programming model for HPC, and hence we provide an evaluation with MPI-based micro-benchmarks. We propose a solution to NUMA issues at the lower level of the software stack that forms the runtime for MPI and other programming models, independent of HPC applications. Our experiments are conducted on a two-node system where each node consists of a two-socket Intel Xeon server with up to four Intel Omni-Path fabric devices connected over PCIe. The performance benefit of affinitizing processes with the best possible network device is evident from the results, where we observe up to 40 percent improvement in uni-directional bandwidth, 48 percent in bi-directional bandwidth, 32 percent improvement in latency measurements, and up to 40 percent improvement in message rate with the OSU benchmark suite. We also extend our evaluation to include OFI operations and an MPI benchmark used for genome assembly. With OFI Remote Memory Access (RMA) operations we see a bandwidth improvement of 32 percent for fi_read and 22 percent for fi_write operations, as well as latency improvements of 15 percent for fi_read and 14 percent for fi_write. The K-mer Matching Interface HASH benchmark shows an improvement of up to 25 percent when using a local network device versus a network device connected to the remote Xeon socket.

  • Research Article
  • 10.2139/ssrn.3697310
Dynamic Resource Allocation Method Based on Symbiotic Organism Search Algorithm for HPC Application
  • Jun 26, 2020
  • SSRN Electronic Journal
  • Vidya Chitre + 1 more


  • Conference Article
  • Cited by 8
  • 10.1109/cloudcom.2011.94
An Initial Survey on Integration and Application of Cloud Computing to High Performance Computing
  • Nov 1, 2011
  • Tomasz Wiktor Wlodarczyk + 1 more

In this paper we survey the state of the art of the integration and application of Cloud Computing (CC) to High Performance Computing (HPC). Motivation and general application areas are presented, with a particular focus on the commoditization of HPC resources. Current experiments usually show significant performance differences between CC and HPC infrastructures and also programming models. However, recent research efforts aim at finding common ground between those approaches. A conclusion emerges that some level of synthesis of CC and HPC is inevitable and probably beneficial for both; however, it requires further significant research effort.

  • Conference Article
  • Cited by 2
  • 10.1117/12.2082519
OpenID connect as a security service in Cloud-based diagnostic imaging systems
  • Mar 17, 2015
  • Weina Ma + 4 more

The evolution of cloud computing is driving the next generation of diagnostic imaging (DI) systems. Cloud-based DI systems are able to deliver better services to patients without being constrained to their own physical facilities. However, privacy and security concerns have consistently been regarded as the major obstacle to the adoption of cloud computing in healthcare domains. Furthermore, traditional computing models and interfaces employed by DI systems are not ready for accessing diagnostic images through mobile devices. REST is an ideal technology for provisioning both mobile services and cloud computing. OpenID Connect, combining OpenID and OAuth, is an emerging REST-based federated identity solution. It is one of the most promising open standards to potentially become the de-facto standard for securing cloud computing and mobile applications, and has even been regarded as the “Kerberos of the Cloud”. We introduce OpenID Connect as an identity and authentication service in cloud-based DI systems and propose enhancements that allow this technology to be incorporated within a distributed enterprise environment. The objective of this study is to offer solutions for secure radiology image sharing among a DI-r (Diagnostic Imaging Repository), heterogeneous PACS (Picture Archiving and Communication Systems), and mobile clients in the cloud ecosystem. By using OpenID Connect as an open-source identity and authentication service, deploying DI-r and PACS to private or community clouds should obtain a security level equivalent to the traditional computing model.

  • Conference Article
  • Cited by 23
  • 10.1109/sapience.2016.7684127
A study of cloud computing environments for High Performance applications
  • Mar 1, 2016
  • Sajay K R + 1 more

High performance applications require high processing power to compute highly intensive and complex workloads for research, engineering, medical, and academic projects. In the traditional way, an organization has to pay very high costs to run an HPC (High Performance Computing) application: it has to purchase highly expensive hardware and maintain it afterwards. The HPC resources on the company premises may not satisfy all the demands of scientific applications, where resources may not be suitable for the corresponding requirements. Considering the case of SMEs (small and medium enterprises), increasing demand is always challenging. Cloud computing is an on-demand, pay-as-you-go model that offers scalable computing resources and unlimited storage in an instantly available way. In this paper we include requirements of HPC applications in the cloud, cluster-based HPC applications, types of clusters, Google's HPC cloud architecture, a performance analysis of various HPC cloud vendors, and four case studies of HPC applications in the cloud.

  • Research Article
  • Cited by 6
  • 10.1002/cpe.4517
Cloud computing and big data: Technologies and applications
  • May 20, 2018
  • Concurrency and Computation: Practice and Experience
  • Mostapha Zbakh + 3 more


  • Research Article
  • Cited by 3
  • 10.1088/1742-6596/2083/4/042087
Research and Application of Cloud Computing and Big Data Technology
  • Nov 1, 2021
  • Journal of Physics: Conference Series
  • Bingran Hui

The combined application of new network technologies with the traditional Internet and industry has enabled new businesses such as the Internet of Things and cloud computing to gradually emerge. To realize the construction of a modern society in China, great attention should be paid to the application of new network technologies such as cloud computing in actual work. Through data integration and correlation analysis, the accuracy and innovation of information are further realized, and overall work efficiency and quality are improved. This paper systematically analyzes network big data technology and the applications of cloud computing and the Internet of Things, conducting research on data collection, data integration, data processing, and data governance. Its main purpose is to ensure that the value and function of these new network technologies are fully utilized.

  • Conference Article
  • Cited by 1
  • 10.1109/bigdatasecurity-hpsc-ids.2016.16
A Secure Management Scheme Designed in Cloud
  • Apr 1, 2016
  • Peng-Yu Wang + 1 more

At present, security has become an important issue in the field of cloud computing; the importance and urgency of the problem cannot be ignored. The popularization and application of cloud computing present both a great challenge and an opportunity for information security. Cloud computing security has thus become a hot spot for technology and academic research. This paper aims to solve the problem of cloud security. According to the unique properties of cloud computing security, it proposes a secure management scheme for cloud computing. The scheme is used to detect hacker attacks, illegal operations, potential threats, and other security events in time, based on big data. There are three main sections in the scheme, which are introduced and analyzed in this paper: vulnerability scanning, system log collection, and correlation analysis. Vulnerability scanning: regular use of Nikto, Sandcat, or other security tools to scan the cloud system for vulnerabilities, regular network security self-testing, and building a detailed scan report. System log collection: using Splunk, Nagios, or other tools to collect system logs and build a detailed log report. Correlation analysis: correlation analysis or canonical correlation analysis is applied to the system log reports and vulnerability scan reports, so that an attacker's actions are found and the system is issued a warning in time. Finally, the scheme was built in the company's test environment, which was then attacked via penetration testing to simulate hacking; its feasibility and function were verified by the testing.

  • Research Article
  • Cited by 71
  • 10.1109/tcc.2014.2339858
Evaluating and Improving the Performance and Scheduling of HPC Applications in Cloud
  • Jul 1, 2016
  • IEEE Transactions on Cloud Computing
  • Abhishek Gupta + 8 more

Cloud computing is emerging as a promising alternative to supercomputers for some high-performance computing (HPC) applications. With cloud as an additional deployment option, HPC users and providers are faced with the challenges of dealing with highly heterogeneous resources, where the variability spans across a wide range of processor configurations, interconnects, virtualization environments, and pricing models. In this paper, we take a holistic viewpoint to answer the question: why and who should choose cloud for HPC, for what applications, and how should cloud be used for HPC? To this end, we perform comprehensive performance and cost evaluation and analysis of running a set of HPC applications on a range of platforms, varying from supercomputers to clouds. Further, we improve performance of HPC applications in cloud by optimizing HPC applications’ characteristics for cloud and cloud virtualization mechanisms for HPC. Finally, we present novel heuristics for online application-aware job scheduling in multi-platform environments. Experimental results and simulations using CloudSim show that current clouds cannot substitute supercomputers but can effectively complement them. Significant improvement in average turnaround time (up to 2X) and throughput (up to 6X) can be attained using our intelligent application-aware dynamic scheduling heuristics compared to single-platform or application-agnostic scheduling.

More from: Computing and Software for Big Science
  • Research Article
  • 10.1007/s41781-025-00148-1
Enforcing Fundamental Relations via Adversarial Attacks on Input Parameter Correlations
  • Nov 5, 2025
  • Computing and Software for Big Science
  • Lucie Flek + 7 more

  • Research Article
  • 10.1007/s41781-025-00146-3
Application of Geometric Deep Learning for Tracking of Hyperons in a Straw Tube Detector
  • Oct 21, 2025
  • Computing and Software for Big Science
  • Adeel Akram + 5 more

  • Research Article
  • 10.1007/s41781-025-00133-8
Analysis Facilities for the HL-LHC White Paper
  • Jul 13, 2025
  • Computing and Software for Big Science
  • D Ciangottini + 65 more

  • Research Article
  • 10.1007/s41781-025-00143-6
Performance Portability of the Particle Tracking Algorithm Using SYCL
  • Jul 1, 2025
  • Computing and Software for Big Science
  • Bartosz Soból + 3 more

  • Research Article
  • 10.1007/s41781-025-00142-7
PhyLiNO: a forward-folding likelihood-fit framework for neutrino oscillation physics
  • Jul 1, 2025
  • Computing and Software for Big Science
  • Denise Hellwig + 4 more

  • Research Article
  • 10.1007/s41781-025-00140-9
SymbolFit: Automatic Parametric Modeling with Symbolic Regression
  • Jul 1, 2025
  • Computing and Software for Big Science
  • Ho Fung Tsoi + 8 more

  • Research Article
  • 10.1007/s41781-025-00141-8
A Downstream and Vertexing Algorithm for Long Lived Particles (LLP) Selection at the First High Level Trigger (HLT1) of LHCb
  • Jul 1, 2025
  • Computing and Software for Big Science
  • V Kholoimov + 4 more

  • Research Article
  • 10.1007/s41781-025-00137-4
oidc-agent - Integrating OpenID Connect Tokens with the Command Line
  • May 22, 2025
  • Computing and Software for Big Science
  • Gabriel Zachmann + 2 more

  • Research Article
  • 10.1007/s41781-025-00138-3
KAN We Improve on HEP Classification Tasks? Kolmogorov–Arnold Networks Applied to an LHC Physics Example
  • May 22, 2025
  • Computing and Software for Big Science
  • Johannes Erdmann + 2 more

  • Research Article
  • 10.1007/s41781-025-00139-2
An automated bandwidth division for the LHCb upgrade trigger
  • May 21, 2025
  • Computing and Software for Big Science
  • T Evans + 2 more
