Articles published on Microservices
- Research Article
- 10.35546/kntu2078-4481.2025.1.2.51
- Nov 5, 2025
- Вісник Херсонського національного технічного університету
- D Rumiantsev
Financial institutions increasingly rely on rapidly processing vast, heterogeneous data streams for effective risk management and regulatory compliance. However, integrating and enriching this data in near real-time presents significant architectural challenges, particularly at scale. This paper details the design, implementation, and impact of a high-availability, microservices-based platform developed to automate data processing for a central Eastern European bank. The primary objective was to create a modular, scalable, and fault-tolerant system capable of managing massive data volumes while ensuring data integrity and low-latency delivery of insights. A case study methodology was employed to document the system's architecture. The platform utilizes an event-driven, asynchronous model built with Java and the Spring Boot framework. Core components include decoupled microservices for data ingestion, normalization, enrichment, and delivery, orchestrated via RabbitMQ message queues. A novel dispatcher service was implemented to manage entity-level concurrency control, preventing race conditions during parallel data processing. The system's performance and health are monitored through an integrated observability stack comprising Prometheus, Grafana, Loki, and Zipkin. The platform successfully processes over 160 million data units daily from over 20 disparate sources. It reduced data processing latency from several days to under 15 minutes and consistently meets a critical service-level agreement (SLA) of delivering search query results in less than 10 seconds. Automation eliminated manual data handling, reduced data quality error rates by approximately 87%, and significantly enhanced the bank’s ability to detect adverse financial events in near real-time. This case study validates the efficacy of a microservices architecture combined with explicit concurrency control for building high-throughput FinTech data platforms. 
The presented design is a practical blueprint for engineering resilient, scalable, and observable systems to address complex data integration challenges in the financial sector and other data-intensive industries.
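The paper's dispatcher is built on Java, Spring Boot, and RabbitMQ; the sketch below is only a minimal Python illustration of the underlying idea of entity-level concurrency control, with all names hypothetical. Hashing each entity ID to a fixed worker queue guarantees that messages for the same entity are processed serially (no race conditions), while different entities still proceed in parallel:

```python
import queue
import threading

class EntityDispatcher:
    """Route each message to a worker chosen by hashing its entity ID,
    so all messages for one entity are processed serially while
    different entities are handled in parallel."""

    def __init__(self, num_workers=4):
        self.queues = [queue.Queue() for _ in range(num_workers)]
        self.results = []
        self._lock = threading.Lock()
        for q in self.queues:
            threading.Thread(target=self._run, args=(q,), daemon=True).start()

    def dispatch(self, entity_id, payload):
        # The same entity_id always hashes to the same queue,
        # which serializes processing per entity.
        self.queues[hash(entity_id) % len(self.queues)].put((entity_id, payload))

    def _run(self, q):
        while True:
            entity_id, payload = q.get()
            with self._lock:
                self.results.append((entity_id, payload))  # stand-in for enrichment work
            q.task_done()

    def join(self):
        for q in self.queues:
            q.join()
```

In the paper's setting the queues would be RabbitMQ queues rather than in-process ones, but the routing invariant (entity ID determines the queue) is the same.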
- Research Article
- 10.30574/wjarr.2025.28.1.3468
- Oct 30, 2025
- World Journal of Advanced Research and Reviews
- Kostadin Almishev
Against the backdrop of the expansion of microservice architectures and Platform Engineering practices, monorepositories serve as a key mechanism for standardizing the software development lifecycle. At the same time, their growth exacerbates the phenomenon whereby version incompatibilities block the work of multiple teams. This study aims to systematize and analyze approaches to dependency management in scalable monorepositories in order to develop a holistic methodology for preventing and resolving version conflicts. The methodological basis includes a systematic review of academic publications, content analysis of technical documentation, and a comparative examination of industry reports. The results demonstrate the limited preventive effectiveness of semantic versioning (SemVer) with respect to compatibility errors and establish a taxonomy of management strategies: proactive (centralized version control), reactive (dependency harmonization), and automated (use of intelligent build systems). Case studies confirm that tool selection should correlate with the scale of development, implying an evolutionary transition from basic orchestrators to industrial-grade build tools such as Bazel. It is concluded that high effectiveness in dependency management is achieved through the synergy of organizational regulation, conflict resolution procedures, and the use of intelligent build systems with fine-grained analysis of the dependency graph. The practical significance of the work lies in providing architects and platform teams with a scientifically verified foundation for decision-making in the design and operation of large-scale software systems.
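The SemVer limitation the study highlights can be made concrete with a small sketch. The check below implements caret-range compatibility as used by package managers such as npm and Cargo (an assumption about the convention, not taken from the paper): a version satisfies `^required` when it shares the breaking component and is not older. The point is that this is a purely declarative check, so a "compatible" minor bump can still break behavior:

```python
def parse(v):
    """Parse 'MAJOR.MINOR.PATCH' into an int tuple."""
    return tuple(int(x) for x in v.split("."))

def caret_compatible(installed, required):
    """True if `installed` satisfies the caret range `^required`.
    For 0.x versions the minor number acts as the breaking component.
    Note: this only checks *declared* compatibility -- a SemVer-compatible
    minor bump can still change behavior, which is exactly the preventive
    gap the review attributes to SemVer."""
    i, r = parse(installed), parse(required)
    if r[0] == 0:
        return i[0] == 0 and i[1] == r[1] and i >= r
    return i[0] == r[0] and i >= r
```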
- Research Article
- 10.24246/itexplore.v4i3.2025.pp334-351
- Oct 26, 2025
- IT-Explore: Jurnal Penerapan Teknologi Informasi dan Komunikasi
- Andrean Vini Bimo Arya Wibowo + 1 more
Conventional attendance systems often face various problems such as inefficiency, inaccuracies in attendance logging, and limitations in recapitulation processes. Manual systems are prone to human error and time-consuming, while fingerprint-based systems may fail when the sensor is affected by dirty, wet, or damaged fingers. This study aims to develop an attendance system based on Artificial Intelligence (AI) by utilizing the face_recognition function in Python and implementing a microservice architecture to improve efficiency and accuracy in attendance recording. The system is developed using the Agile Feature-Driven Development (FDD) method, which focuses on building system features based on prioritized business values. This method is applied within the Software Development Life Cycle (SDLC) to ensure a structured, iterative, and user-oriented development process. Facial recognition is performed by comparing the encoding of the captured face image with the data stored in the database. The results show that the system is capable of recording attendance automatically, accurately, and in real-time. Furthermore, the recapitulation process becomes more efficient as the system manages and presents the data systematically.
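The comparison step the abstract describes (matching a captured encoding against stored encodings) can be sketched without the full `face_recognition` stack. That library represents each face as a 128-dimensional encoding and, in `compare_faces`, treats two encodings as the same person when their Euclidean distance falls within a tolerance (0.6 by default). The toy vectors below are 3-dimensional for readability; the logic is the same:

```python
import math

MATCH_TOLERANCE = 0.6  # face_recognition's default compare_faces tolerance

def face_distance(a, b):
    """Euclidean distance between two face encodings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_match(known, candidate, tolerance=MATCH_TOLERANCE):
    """Two encodings match when they lie within `tolerance` of each other."""
    return face_distance(known, candidate) <= tolerance

def identify(candidate, enrolled):
    """Return the enrolled name whose stored encoding is closest to
    `candidate`, or None when no encoding is within the tolerance."""
    best_name, best_dist = None, MATCH_TOLERANCE
    for name, encoding in enrolled.items():
        d = face_distance(encoding, candidate)
        if d <= best_dist:
            best_name, best_dist = name, d
    return best_name
```

In the attendance system, `identify` returning a name would trigger an attendance record for that person; returning `None` would reject the capture.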
- Research Article
- 10.22399/ijcesen.4172
- Oct 24, 2025
- International Journal of Computational and Experimental Science and Engineering
- Naveen Kumar Kasarla
DevOps teams struggle with incident management in distributed systems where traditional monitoring creates more problems than solutions. Alert storms overwhelm operations centers while genuine issues hide among thousands of false positives. Engineers waste time correlating data from dozens of different tools instead of fixing actual problems that impact users. Most organizations handle incidents the hard way. Systems break, alerts fire, and teams scramble to understand what happened while customers complain. This reactive cycle burns through engineering talent and damages business relationships during extended outages. Manual correlation across microservice architectures becomes impossible as systems grow more complex. Intelligent operations platforms address this operational chaos by processing massive data volumes that overwhelm individual engineers during crises. Algorithmic models identify subtle system behaviors that signal developing problems, catching potential failures before they impact end users or cascade across service dependencies. These platforms adapt their detection capabilities based on observed incident histories and changing infrastructure patterns. Organizations deploying intelligent operations report substantial improvements in incident response metrics. Automated correlation eliminates hours of manual investigation, while predictive analytics enable proactive maintenance during scheduled windows rather than emergencies. Teams finally escape the constant firefighting that prevents strategic infrastructure improvements and architectural optimization.
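The "automated correlation" the abstract credits with eliminating manual investigation can be illustrated with a deliberately simplified sketch (not any specific platform's algorithm): collapse an alert storm by grouping alerts for the same service that arrive within a time window into a single incident, so one degrading service pages once instead of hundreds of times:

```python
def correlate(alerts, window_seconds=300):
    """Collapse an alert storm into incidents. Alerts for the same
    service arriving within `window_seconds` of the previous alert
    are folded into one open incident instead of paging separately.
    Each alert is a (timestamp, service, message) tuple."""
    incidents = []
    open_incident = {}  # service -> index of its open incident
    for ts, service, message in sorted(alerts):
        idx = open_incident.get(service)
        if idx is not None and ts - incidents[idx]["last_ts"] <= window_seconds:
            incidents[idx]["alerts"].append(message)
            incidents[idx]["last_ts"] = ts
        else:
            incidents.append({"service": service, "last_ts": ts, "alerts": [message]})
            open_incident[service] = len(incidents) - 1
    return incidents
```

Real intelligent-operations platforms add learned correlation across services and dependency graphs on top of this kind of windowed deduplication.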
- Research Article
- 10.61132/venus.v3i5.1110
- Oct 14, 2025
- Venus: Jurnal Publikasi Rumpun Ilmu Teknik
- Ni Made Ardhiya Shita Pramesti Dewi + 2 more
This research discusses the development of a school Geographic Information System (GIS) based on a microservice architecture to simplify access and management of school data. The background of this study is the need for an efficient and well-organized school data management system that can present school information interactively to the public. The purpose of this research is to build a system capable of displaying school locations and providing data management features for teachers, students, and school accreditation through CRUD (Create, Read, Update, Delete) operations. The development method includes database design, API creation for each microservice, data integration through an interactive map interface using Leaflet, and system testing using the Black Box Testing method. The test results show that all system features function properly and meet user requirements. The implementation of microservice architecture allows the system to be more flexible, easily updated, and well distributed among services. With this system, the public can access school information quickly and accurately, while schools can manage their data more effectively.
- Research Article
- 10.52589/ajste-o1g0v4go
- Oct 14, 2025
- Advanced Journal of Science, Technology and Engineering
- Bulus, S W + 4 more
The transformation from monolithic to microservices architectures in e-commerce systems has gained significant traction as organizations seek enhanced scalability, resilience, and operational agility. This systematic review examines the current state of research on migrating monolithic e-commerce systems to microservices using event-driven architecture (EDA) approaches. Following PRISMA guidelines, we analyzed 11 relevant studies focusing on decomposition frameworks, migration methodologies, and implementation strategies specific to e-commerce contexts. Our findings reveal that event-driven architectures provide superior scalability and fault tolerance compared to traditional synchronous communication patterns, with asynchronous messaging showing 30-40% better performance under high load conditions. Key migration approaches include the Strangler Fig Pattern, Domain-Driven Design (DDD) decomposition, and process mining-based identification of service boundaries. However, challenges persist in data consistency management, service communication complexity, and organizational alignment during migration. The review identifies critical gaps in standardized metrics for evaluating migration success and limited tooling support for automated decomposition in e-commerce-specific contexts. This study contributes to the understanding of event-driven microservices migration patterns and provides actionable insights for practitioners undertaking e-commerce modernization initiatives.
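The Strangler Fig Pattern named in the review can be sketched as a thin routing facade (a minimal illustration with hypothetical names, not the reviewed studies' implementations): requests are intercepted in front of the monolith, and endpoints are migrated to new microservices one prefix at a time while everything else still falls through to the legacy system:

```python
class StranglerFacade:
    """Strangler Fig routing sketch: forward each request either to a
    new microservice or to the legacy monolith, so endpoints can be
    migrated incrementally."""

    def __init__(self, legacy_handler):
        self.legacy = legacy_handler
        self.migrated = {}  # route prefix -> new microservice handler

    def migrate(self, prefix, handler):
        """'Strangle' one endpoint: route `prefix` to the new service."""
        self.migrated[prefix] = handler

    def handle(self, path, request):
        # Longest-prefix match wins, so nested routes can be migrated
        # independently of their parents.
        for prefix in sorted(self.migrated, key=len, reverse=True):
            if path.startswith(prefix):
                return self.migrated[prefix](request)
        return self.legacy(request)
```

Once every prefix has been migrated, the legacy handler receives no traffic and can be retired, which is the pattern's end state.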
- Research Article
- 10.63278/jicrcr.vi.3324
- Oct 11, 2025
- Journal of International Crisis and Risk Communication Research
- Sriram Ramakrishnan
This article presents a framework for designing disaster-resistant microservice architectures leveraging AWS PrivateLink, multi-region service meshes, and advanced service discovery mechanisms. The article examines key integration patterns for AWS App Mesh federation across regions, covering both control plane redundancy models and data plane resilience strategies that maintain service availability during regional outages. It presents service discovery mechanisms for regional failover, comparing DNS-based and API-based discovery approaches while addressing latency considerations in cross-region deployments. Traffic management strategies during regional events are analyzed, including blue/green deployment methodologies, progressive traffic shifting techniques, circuit breaking configurations, and the tradeoffs between automatic failover and controlled degradation. The article concludes with implementation best practices covering security posture for cross-region connectivity, cost optimization approaches for redundant infrastructure, observability requirements across regional boundaries, and validation testing methodologies for disaster scenarios. Drawing on enterprise implementations, it provides actionable architectural guidance for organizations seeking to build resilient microservice systems that maintain operational integrity during catastrophic regional failures.
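Among the traffic management techniques listed, circuit breaking is compact enough to sketch. The minimal state machine below (an illustration, not the article's AWS App Mesh configuration) opens after a run of consecutive failures so callers fail fast instead of piling up on a dead regional dependency, then allows a half-open trial call after a timeout:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after `max_failures` consecutive failures
    the circuit opens and calls fail fast; after `reset_timeout` seconds
    one trial call is let through (half-open), and success closes the
    circuit again."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # monotonic time when the circuit opened

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            # timeout elapsed: fall through as a half-open trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        self.opened_at = None
        return result
```

In a mesh like App Mesh the equivalent behavior is declared in route/listener configuration rather than written by hand, but the open/half-open/closed lifecycle is the same.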
- Research Article
- 10.63278/jicrcr.vi.3322
- Oct 11, 2025
- Journal of International Crisis and Risk Communication Research
- Hanumantha Rao Bodapati
The shift toward digital roadside assistance represents a paradigm change in the design of automotive emergency response services, converting traditional analog coordination into digital service delivery. This change relies on sophisticated service-oriented architectures that handle emergency incidents as discrete events flowing through a standardized processing pipeline. State-of-the-art platforms employ distributed microservices architectures, event-driven systems, and machine learning algorithms to optimize provider selection and resource allocation. Digitization generates quantifiable value across stakeholder groups through increased operational efficiency, better service transparency, and new capability development. Customers benefit from markedly reduced response times, the elimination of information uncertainty, and access to higher service levels. Insurers gain operational efficiencies through lower call center volumes and automated claims transactions. Service providers benefit from intelligent job distribution algorithms and improved resource utilization. Deployment requires robust privacy protections and extensive data governance schemes, balancing operational efficiency with regulatory compliance across jurisdictions. Advanced geofencing and dynamic boundary management enable responsible data handling without interrupting service.
- Research Article
- 10.59573/emsj.9(5).2025.88
- Oct 1, 2025
- European Modern Studies Journal
- Hemasree Koganti
The financial services industry has undergone a profound architectural transformation as institutions abandon traditional monolithic systems in favor of distributed microservices architectures that better align with modern business demands for agility, scalability, and regulatory compliance. This comprehensive article examines the cutting-edge innovations in microservices architecture specifically tailored for financial environments, exploring how container orchestration technologies, service mesh implementations, and event-driven patterns address the unique challenges of maintaining data consistency, ensuring robust security, and meeting stringent regulatory requirements in distributed systems. The article reveals that successful microservices adoption in financial services requires sophisticated approaches to distributed data management, performance optimization for latency-sensitive operations, and comprehensive observability frameworks that enable effective monitoring and troubleshooting across complex service topologies. Contemporary implementations demonstrate innovative solutions for managing distributed transactions through saga patterns, implementing zero-trust security models, and achieving regulatory compliance through automated audit trails and policy enforcement mechanisms. The article's analysis of real-world case studies from major financial institutions and fintech organizations illustrates both the transformative potential and inherent complexities of microservices architectures, highlighting critical success factors including gradual migration strategies, organizational restructuring, and substantial investments in platform automation and team capabilities.
Emerging trends toward serverless computing, artificial intelligence integration, and quantum-safe security preparations indicate that the architectural evolution will continue accelerating, requiring financial institutions to develop adaptive technology strategies that balance innovation with the stability and compliance requirements fundamental to financial services operations.
- Research Article
- 10.1016/j.compeleceng.2025.110550
- Oct 1, 2025
- Computers and Electrical Engineering
- Neha Kaushik + 2 more
A systematic review of QoS enhancement techniques in microservices architecture
- Research Article
- 10.63345/ijrmeet.org.v13.i10.2
- Oct 1, 2025
- International Journal of Research in Modern Engineering & Emerging Technology
Automating Cloud-Based Expense Tracking Solutions Using Concur and Microservice Architectures
- Research Article
- 10.3390/jcp5040078
- Oct 1, 2025
- Journal of Cybersecurity and Privacy
- Edi Marian Timofte + 6 more
Cyber-physical infrastructures such as hospitals and smart campuses face hybrid threats that target both digital and physical domains. Traditional security solutions separate surveillance from network monitoring, leaving blind spots when attackers combine these vectors. This paper introduces ARGUS, an autonomous robotic platform designed to close this gap by correlating cyber and physical anomalies in real time. ARGUS integrates computer vision for facial and weapon detection with intrusion detection systems (Snort, Suricata) for monitoring malicious network activity. Operating through an edge-first microservice architecture, it ensures low latency and resilience without reliance on cloud services. Our evaluation covered five scenarios—access control, unauthorized entry, weapon detection, port scanning, and denial-of-service attacks—with each repeated ten times under varied conditions such as low light, occlusion, and crowding. Results show face recognition accuracy of 92.7% (500 samples), weapon detection accuracy of 89.3% (450 samples), and intrusion detection latency below one second, with minimal false positives. Audio analysis of high-risk sounds further enhanced situational awareness. Beyond performance, ARGUS addresses GDPR and ISO 27001 compliance and anticipates adversarial robustness. By unifying cyber and physical detection, ARGUS advances beyond state-of-the-art patrol robots, delivering comprehensive situational awareness and a practical path toward resilient, ethical robotic security.
- Research Article
- 10.1016/j.infsof.2025.107808
- Oct 1, 2025
- Information and Software Technology
- Nuha Alshuqayran + 2 more
A model-driven architecture approach for recovering microservice architectures: Defining and evaluating MiSAR
- Research Article
- 10.47760/cognizance.2025.v05i09.012
- Sep 30, 2025
- Cognizance Journal of Multidisciplinary Studies
- Arun Ganapathi
This article examines the essential competencies and strategic approaches required to develop a successful career in Cloud-Native Identity and Access Management (IAM). Through analysis of industry trends, technical skill requirements, protocol specializations, and professional development pathways, a comprehensive framework is established for practitioners seeking to advance in this specialized field. The convergence of cloud platforms, containerization technologies, and microservices architectures has fundamentally transformed identity management implementation paradigms, creating both challenges and opportunities for security professionals. By developing robust technical foundations, mastering specialized protocols, building compelling portfolios, and pursuing continuous learning, practitioners can position themselves effectively in this rapidly evolving domain.
- Research Article
- 10.63412/kb44xf51
- Sep 30, 2025
- International Journal of Global Innovations and Solutions
- Akshay Mittal
The rapid enterprise adoption of multi-cloud, microservice architectures introduces unprecedented complexity and security challenges. Traditional, reactive security models are proving inadequate, as code changes can propagate to global production systems within minutes, leaving minimal time for after-the-fact audits. Existing security solutions often operate in silos, failing to provide a coordinated and autonomous defense posture capable of addressing threats that span heterogeneous cloud environments. This paper introduces a novel framework for autonomous, cross-cloud threat mitigation that utilizes Multi-Agent Reinforcement Learning (MARL). In our proposed system, lightweight, self-defending artificial intelligence agents are deployed within each cloud environment to act as intelligent sentinels inside the software-delivery pipeline. These agents learn collaboratively to identify and remediate security risks in real-time, functioning as self-healing remediation agents. Through simulated multi-cloud failure scenarios, we demonstrate that this approach can significantly reduce mean-time-to-resolution for security incidents, projecting improvements comparable to the 60% reduction in vulnerability patch time observed in related empirical studies.
- Research Article
- 10.37547/tajet/volume07issue09-17
- Sep 30, 2025
- The American Journal of Engineering and Technology
- Oleksandr Tserkovnyi
This paper discusses the practical and feature gaps encountered with Google Document AI while building the AI product for the TrialBase platform (ai.trialbase.com), which automates legal document analysis. The results matter because the explosion of electronic legal documents demands fast and reliable parsing, which is essential for systems based on LLMs and retrieval-augmented generation. Off-the-shelf Document AI seldom works well in practice unless the PDFs are undamaged, the dataset is small enough not to hit API quotas, and processing costs are not a concern. The architecture proposed in this paper is robust and efficient at transforming diverse documents into structured data. An event-driven microservice architecture with message queues and a PDF sanitization pipeline solves these real-world problems by enabling a ProcessorPool, in which multiple processors call the synchronous Document AI API concurrently to work around per-processor quota limits, drastically reducing processing times. Pre-sanitization, coupled with asynchronous batch processing and a custom load balancer, achieved a tenfold speed increase with enhanced reliability on real-world legal documents. The article is meant to help LegalTech researchers and practitioners, workflow developers, and engineers working on high-performance, reliable Google Cloud-based projects.
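The ProcessorPool idea (fanning documents out over several processors so aggregate throughput exceeds any single processor's quota) can be sketched independently of the Google Cloud client library. In the sketch below, `process_fn(processor_id, doc)` is a hypothetical stand-in for one synchronous Document AI call; documents are attributed round-robin to processor IDs and executed concurrently:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import cycle

def process_all(documents, processor_ids, process_fn, max_workers=8):
    """Fan documents out across several processors. Each synchronous
    call is assigned round-robin to one processor ID, so aggregate
    throughput approaches the sum of the per-processor quotas.
    Results are returned in input order (ThreadPoolExecutor.map
    preserves ordering)."""
    assignments = list(zip(cycle(processor_ids), documents))
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(lambda a: process_fn(*a), assignments))
```

A production version would add per-processor rate limiting and retry on quota errors, which the paper's custom load balancer handles.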
- Research Article
- 10.52783/jisem.v10i60s.13278
- Sep 30, 2025
- Journal of Information Systems Engineering and Management
- Krishna Dornala
The management of auctioned lease returns in automotive fleet operations presents significant challenges including fragmented data architectures, static decision-making processes, and limited scalability. This paper proposes a comprehensive digital framework that leverages artificial intelligence, Internet of Things sensors, and distributed ledger technology to address these inefficiencies. The framework employs predictive analytics for demand forecasting, dynamic algorithms for fleet allocation, and real-time optimization for routing decisions. Built on cloud-native microservices architecture, the system integrates vehicle telematics, blockchain-based transaction verification, and automated reconciliation processes to create an end-to-end solution. The proposed framework demonstrates potential for substantial improvements in operational efficiency, including reduced delivery times, decreased manual intervention requirements, and optimized resource utilization. Additionally, the system provides enhanced transparency through immutable transaction records and contributes to environmental sustainability through route optimization that minimizes fuel consumption and carbon emissions. By transitioning from reactive to proactive fleet management, this framework offers a strategic approach to transforming automotive logistics operations, positioning organizations to achieve competitive advantages through data-driven decision-making and intelligent automation.
- Research Article
- 10.14421/jiska.2025.10.3.341-350
- Sep 30, 2025
- JISKA (Jurnal Informatika Sunan Kalijaga)
- Alam Rahmatulloh + 2 more
The implementation of environmentally friendly campus concepts has become increasingly crucial in addressing global environmental challenges. Eco-Maps is an application designed to visualize and manage sustainability efforts on campus, including energy management, waste management, and sustainable transportation initiatives. To enhance efficiency and flexibility, this study discusses the application of a microservice architecture in Eco-Maps. This architecture supports faster and more efficient development, testing, and deployment, while enabling horizontal scalability to manage high complexity and large data volumes. By separating application functions into independent services, microservices facilitate maintenance and updates while minimizing the impact of failures in individual services. This study also reviews the integration of containerization technologies, such as Docker and Kubernetes, to support microservice implementation. Through these technologies, the application can be deployed quickly and consistently across various environments, from development to production. System testing was conducted using load testing and stress testing methods, as shown in Tables 3 and 4. The results demonstrate that the average response time across ten iterations was 745.9 ms, with an average CPU usage of 44.38%. These findings confirm that processing load directly affects CPU efficiency and overall system performance.
- Research Article
- 10.52783/jisem.v10i60s.13060
- Sep 30, 2025
- Journal of Information Systems Engineering and Management
- Anwar Ahmad
The emergence of enterprise AI architect roles represents a significant development in organizational technology leadership, driven by the transformative potential of artificial intelligence across industry sectors. This article examines the multifaceted competencies required for success in enterprise AI architecture, encompassing technical expertise in machine learning lifecycle management, data engineering skills, infrastructure optimization, and model serving frameworks. Strategic leadership dimensions include cross-functional collaboration skills, communication proficiency for diverse stakeholder engagement, change management expertise, and the ability to align AI initiatives with broader business objectives. The discussion examines established design patterns, including microservice architectures, event-driven systems, and model serving frameworks that enable scalable AI deployments. Operational considerations address monitoring systems for AI-specific vulnerabilities, governance frameworks for regulatory compliance, and disaster recovery schemes for mission-critical applications. The findings suggest that organizations with dedicated AI architecture leadership achieve better implementation results than traditional technology deployment approaches, including improved regulatory compliance, operational cost reduction, faster time-to-market, and higher stakeholder satisfaction ratings.
- Research Article
- 10.63391/3292f8
- Sep 30, 2025
- INTERNATIONAL INTEGRALIZ SCIENTIFIC
- Raphael Barbosa Vieira Louzada Neumann
The modernization of legacy systems represents one of the greatest challenges in the digital transformation of companies, especially in industrial sectors where operational continuity and efficiency are crucial. This article aims to analyze how the adoption of Kubernetes can facilitate the modernization of legacy applications, promoting the transition to a more scalable, resilient, and agile microservices architecture. The methodology employed was qualitative, with a case study that investigated the migration process of a monolithic application to Kubernetes, addressing the technical and organizational challenges faced during implementation. The results indicate that the modernization led to significant improvements in scalability, operational cost reduction, and greater flexibility in delivering new services. The research concludes that, in addition to being an effective technical solution, Kubernetes enables a cultural transformation within organizations by promoting a more agile and decentralized approach to software development.