  • Open Access
  • Research Article
  • 10.5753/jisa.2026.5922
Leveraging zero trust and risk indicators to support continuous vulnerability compliance
  • Jan 21, 2026
  • Journal of Internet Services and Applications
  • Diego Gama + 4 more

Open source dependencies are the leading source of vulnerabilities in applications and are often exploited in software supply chain attacks. Efforts to assess vulnerabilities are employed during DevSecOps pipelines in order to keep a system compliant with security regimes. However, current strategies for continuous compliance are limited to preventing issues before deployment, and thus do not address changes in dynamic aspects such as newfound vulnerabilities, let alone how to respond to such incidents. In this work, we leverage zero trust to enable continuous, post-deployment vulnerability compliance assessment, isolating workloads that fail to meet a minimum security posture. This approach balances exploitation prevention with application availability, a fundamental trade-off for critical use cases. The solution is built on top of SPIRE, a robust open-source identity provider based on workload attestation, and implements a custom plugin that responds to compliance violations driven by dynamic aspects exposed by OWASP's Dependency Track, an open-source tool for monitoring software components and their dependencies for vulnerabilities. To enhance flexibility in the security-availability trade-off, we introduce a grace period mechanism, enabling organizations to defer enforcement of newly identified vulnerabilities based on workload criticality, thus supporting availability for non-critical workloads without compromising long-term security. Finally, we evaluate the performance impact of this approach on a SPIRE environment, showing that the added resource usage reliably remains within the 16 GiB of RAM and 4 vCPUs recommended for running Dependency Track in production. We also show that the plugin adds less than 6 seconds of latency to the attestation process, which is insignificant given its default frequency of twice per hour. Moreover, the results confirm that the approach successfully prevents vulnerability exploitation by prioritizing security, while enabling controlled flexibility in less critical contexts.
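The grace-period mechanism described above can be sketched as a small policy check. This is a minimal illustration only: the class names, severity labels, and thresholds are assumptions, not the paper's actual SPIRE plugin API or Dependency Track schema.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical policy: critical workloads get no grace period,
# standard workloads may defer enforcement for a week.
GRACE_PERIODS = {
    "critical": timedelta(hours=0),
    "standard": timedelta(days=7),
}

@dataclass
class Finding:
    cve_id: str
    severity: str          # e.g. "HIGH", as a scanner might report it
    discovered_at: datetime

def is_compliant(findings: list[Finding], workload_criticality: str,
                 now: datetime, min_severity: str = "HIGH") -> bool:
    """Return False (deny attestation) if any severe finding has
    outlived the workload's grace period."""
    grace = GRACE_PERIODS[workload_criticality]
    for f in findings:
        if f.severity == min_severity and now - f.discovered_at > grace:
            return False
    return True
```

Under this sketch, a newly disclosed vulnerability keeps a non-critical workload attestable for the deferral window, while a critical workload is isolated on the next attestation cycle.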

  • Open Access
  • Journal Issue
  • 10.5753/jisa.2025
  • Jan 9, 2026
  • Journal of Internet Services and Applications

  • Open Access
  • Research Article
  • 10.5753/jisa.2025.5933
Contextual CVSS Scoring Accounting for Vulnerability Batches
  • Dec 16, 2025
  • Journal of Internet Services and Applications
  • Lucas Guimarães Miranda + 7 more

Software vulnerabilities are intrinsically related to product characteristics. The properties of a vulnerability, along with its severity, must be assessed in the context of the product wherein the vulnerability is located. In this paper, our goal is to determine how context impacts severity. To this aim, we pose the following questions: 1) How do different sources statistically differ in the way they parametrize severity? 2) Are there latent patterns that can be learned to determine how context impacts severity? 3) How do vulnerability batches shape scoring practices across sources? To answer these questions, we leverage public data from the National Vulnerability Database (NVD). By comparing CVSS ratings reported by different sources, we provide insights into how scores are parametrized considering contextual factors. For the first question, we show that Industrial Control System (ICS) products tend to have higher attack complexity and more restrictive attack vectors than their general counterparts. For the second, we show that a Large Language Model, CVSS-BERT, can learn context-specific CVSS scores from vulnerability descriptions, achieving F1 scores above 90% and enabling knowledge transfer across sources. For the third, we show that while NVD often assigns uniform scores within a batch, CNAs introduce context-specific variations. These findings highlight the importance of context in assessing severity and suggest the feasibility of semi-automated, batch-aware vulnerability assessments.
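To make the "parametrization" the paper compares across sources concrete, here is a minimal CVSS v3.1 base-score calculator for scope-unchanged vectors, following the published first.org specification (metric weights and the round-up function are from the spec; this is not the paper's own tooling).

```python
import math

# CVSS v3.1 metric weights for scope-UNCHANGED vectors.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}
AC = {"L": 0.77, "H": 0.44}
PR = {"N": 0.85, "L": 0.62, "H": 0.27}
UI = {"N": 0.85, "R": 0.62}
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}

def roundup(x: float) -> float:
    """Spec-defined round-up to one decimal, robust to float error."""
    i = int(round(x * 100000))
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10

def base_score(av, ac, pr, ui, c, i, a) -> float:
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    return 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))
```

The paper's ICS observation maps directly onto these weights: moving AC from Low (0.77) to High (0.44), or AV from Network (0.85) to Adjacent (0.62), lowers exploitability and hence the contextual score.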

  • Open Access
  • Research Article
  • 10.5753/jisa.2025.6112
Analysis of Computational Resource Consumption of an Intrusion Detection System Based on Containerized Network Functions Virtualization
  • Dec 12, 2025
  • Journal of Internet Services and Applications
  • Lucas Teles De Oliveira + 3 more

The rapid expansion of global telecommunications networks has driven a continuous increase in Internet adoption, requiring telecom companies to deploy scalable services efficiently to accommodate new users. At the same time, the constant pursuit of cost reduction and improved service delivery has highlighted the need to enhance network function performance. Network Function Virtualization (NFV) addresses these demands by replacing costly, dedicated hardware with virtualized network functions running on virtual machines or containers. This approach enables better resource allocation, scalability, and cost reduction. While traditional virtualization methods can be slow and resource-intensive, container-based solutions, such as those offered by Docker, provide a more lightweight and efficient alternative. By reducing virtualization overhead through kernel sharing, containers significantly streamline the deployment and scalability of NFV-based services. Alongside this evolution, the expansion of online services has brought a surge in cybersecurity threats, highlighting the urgent need for Intrusion Detection Systems (IDS) capable of monitoring traffic patterns and detecting malicious activity in real time. This paper presents a modular testbed framework for NFV-based IDS evaluation, deploying Snort in Docker containers and comparing computational resource consumption against a traditional virtual machine (VM) implementation. The framework enables dynamic instantiation, scalability, and efficient orchestration of IDS components, providing a practical environment to study how different virtualization strategies impact system performance. Specifically, our study i) evaluates the performance of the NFV-IDS running on both a VM and a Docker container, and ii) tests NFV-IDS alongside an Nginx web server under cyberattack. The results provide insights into the viability of containerized NFV for IDS deployment, particularly in environments that demand lightweight, dynamic, and resource-efficient security infrastructures. Furthermore, the framework provides a foundation for future experiments incorporating alternative detection engines, traffic profiles, or virtualization strategies.
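A containerized Snort sensor of the kind the testbed describes might be launched roughly as below. The image name, interface, and helper function are illustrative assumptions, not the paper's exact configuration; the Docker flags (`--network host`, `--cap-add NET_ADMIN`) and Snort options (`-i`, `-A console`) are standard.

```python
# Build (but do not run) the `docker run` argument list for an
# inline Snort sensor; host networking and NET_ADMIN let the
# container observe real traffic on the given interface.
def snort_container_cmd(name: str, iface: str,
                        image: str = "snort:latest") -> list[str]:
    return [
        "docker", "run", "--detach",
        "--name", name,
        "--network", "host",          # share the host's network stack
        "--cap-add", "NET_ADMIN",     # needed for packet capture
        image,
        "snort", "-i", iface, "-A", "console",
    ]
```

Passing the resulting list to `subprocess.run` would start one sensor instance; the same builder can be called repeatedly to instantiate IDS components dynamically, as the framework's orchestration requires.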

  • Open Access
  • Research Article
  • 10.5753/jisa.2025.5914
Composing State Machine Replication
  • Dec 4, 2025
  • Journal of Internet Services and Applications
  • Caroline Martins Alves + 3 more

High availability is a fundamental requirement in large-scale distributed systems, where replication strategies are central in keeping applications operational despite a bounded number of failures. State Machine Replication (SMR) is one of the most widely adopted approaches for implementing highly available, fault-tolerant services, as it increases uptime while ensuring strong consistency. In recent years, research on SMR has yielded numerous variations tailored to enhance resilience, performance, and scalability. In this paper, we revisit SMR from a new perspective by introducing Composing State Machine Replication (CSMR), a method that enables fault-tolerant service composition. By composing SMRs, we promote the reuse of existing services to construct more complex and reliable systems. This modular approach fosters loosely coupled, flexible architectures, contributing to the theoretical foundations of SMR and aligning with common development practices in cloud computing and microservices. We formally define CSMR and demonstrate how composition can be used to extend existing SMR specifications with new features. For example, CSMR allows the semantics of a service operation to be extended by enabling different state machine replicas to execute complementary steps of the same operation. Additionally, SMR composition facilitates sharding and state partitioning by assigning disjoint state variables to separate SMRs. Beyond formalization, the paper provides illustrative examples of CSMR and introduces a high-level CSMR architecture that highlights the essential components, their responsibilities, and their interactions in supporting the composition process. To further demonstrate practicability, we present an API for building CSMR systems that combines RPC-based communication with declarative configuration in YAML format.
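The sharding use of composition mentioned above can be sketched in a few lines: disjoint state variables are assigned to separate state machines, and a composer routes each operation to the owner. This is a toy illustration with invented class names, and it deliberately omits the replication and consensus layer that a real SMR provides.

```python
class SMR:
    """Stand-in for one replicated state machine (consensus omitted)."""
    def __init__(self):
        self.state: dict[str, int] = {}

    def execute(self, op: str, key: str, value: int = 0) -> int:
        if op == "write":
            self.state[key] = value
        return self.state.get(key, 0)

class CSMR:
    """Compose several SMRs, partitioning the key space among them."""
    def __init__(self, shards: int):
        self.shards = [SMR() for _ in range(shards)]

    def _owner(self, key: str) -> SMR:
        # Each key (state variable) belongs to exactly one SMR.
        return self.shards[hash(key) % len(self.shards)]

    def execute(self, op: str, key: str, value: int = 0) -> int:
        return self._owner(key).execute(op, key, value)
```

The same composition pattern supports the paper's other use case, extending an operation's semantics, by having the composer forward one logical operation to several SMRs that each execute a complementary step.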

  • Open Access
  • Research Article
  • 10.5753/jisa.2025.5154
Empowering Client Selection with Local Knowledge Distillation for Efficient Federated Learning in Non-IID Data
  • Oct 2, 2025
  • Journal of Internet Services and Applications
  • Aissa Hadj Mohamed + 4 more

Federated Learning (FL) is a distributed approach in which multiple devices collaborate to train a shared, global model (GM). During its training, client devices must frequently communicate their gradients to the central server to update the GM weights. This incurs significant communication costs (bandwidth utilization and the number of messages exchanged). The heterogeneous nature of clients’ local datasets poses an extra challenge to the model training. In this sense, we introduce FedSeleKDistill, the Federated Selection and Knowledge Distillation Algorithm, to decrease the overall communication costs. FedSeleKDistill is an innovative combination of (i) client selection and (ii) knowledge distillation, with three main objectives: (i) reducing the number of devices training at every round; (ii) decreasing the number of rounds until convergence; and (iii) mitigating the effect of clients’ heterogeneous data on the GM effectiveness. In this paper, we extend the results obtained from the initial paper presenting FedSeleKDistill. The additional experimental evaluations on the MNIST and German Traffic Signs Benchmark datasets demonstrate that FedSeleKDistill is highly efficient in training the GM until convergence in heterogeneous FL. FedSeleKDistill reaches a higher accuracy score and faster convergence than state-of-the-art models. Our results also show higher performance when analyzing the accuracy scores on the clients’ local datasets.
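The two ingredients named above, client selection and knowledge distillation, can each be sketched compactly. The loss-based selection rule and function names here are illustrative assumptions; FedSeleKDistill's actual criterion is defined in the paper. The distillation loss is the standard temperature-softened cross-entropy.

```python
import math

def select_clients(local_losses: dict[str, float], k: int) -> list[str]:
    """Pick the k client ids furthest from convergence
    (illustrative rule: highest local loss first)."""
    ranked = sorted(local_losses, key=local_losses.get, reverse=True)
    return ranked[:k]

def kd_loss(student_logits, teacher_logits, T: float = 2.0) -> float:
    """Soft-target distillation loss: cross-entropy between the
    temperature-softened teacher and student distributions."""
    def softmax(z):
        m = max(v / T for v in z)           # subtract max for stability
        e = [math.exp(v / T - m) for v in z]
        s = sum(e)
        return [v / s for v in e]
    p, q = softmax(teacher_logits), softmax(student_logits)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q)) * T * T
```

Selecting fewer clients per round directly cuts the messages exchanged, while the distillation term lets the retained clients absorb global knowledge locally, which is how the combination targets both communication cost and non-IID drift.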

  • Open Access
  • Research Article
  • 10.5753/jisa.2025.5495
Querying large video datasets: a systematic literature review
  • Oct 2, 2025
  • Journal of Internet Services and Applications
  • Clayton Kossoski + 2 more

Querying large-scale video datasets differs from querying short videos due to the inherent challenges in volume, velocity, and variety. In the last decade, this area has emerged thanks to the effectiveness of deep learning methods, new graphics processing units, new video databases, and advances in distributed computing, among others. The main goal of querying video streams is to find the best balance between available hardware, software resources, and query latency, taking into account quality goals, constraints, and video configurations. Due to these challenges, many development methods, frameworks, and evaluation metrics have been proposed. As a result, this systematic literature review addresses a gap in the current body of knowledge. It covers ten years, from 2014 to 2024, and 4,248 papers, of which 99 were identified as relevant and used to answer the research questions on (i) processing methods, hardware architecture, and software, (ii) query languages, (iii) evaluation metrics, and (iv) available datasets. In addition, this review shows how this niche is promising and concerned with the rational use of available resources. Among the results, the following are highlighted: cheap detection models are very popular, smart IoT devices are very useful, distributed computing for video query applications is complex, system latency is essential, and there is no standard video query language. Current trends include the development of a standard video query language, in-memory computing, processing where data is produced, low-latency processing, and active learning for labeling objects. This original work shows a domain perspective, identifies problems and opportunities, and provides directions for future studies.

  • Open Access
  • Research Article
  • Citations: 2
  • 10.5753/jisa.2025.5055
Plant Disease Detection Using Federated Learning and Cloud Infrastructure for Scalability and Data Privacy
  • Sep 1, 2025
  • Journal of Internet Services and Applications
  • Paulo V Caminha + 1 more

Agriculture faces significant challenges from crop diseases, making early and accurate detection critical. Federated Learning (FL), an advancement in artificial intelligence (AI) and machine learning (ML), presents a promising solution by enabling collaborative model training on decentralized data without the need to share sensitive information. This article examines the application of FL in detecting plant diseases through image analysis, highlighting the role of cloud computing in addressing challenges related to data processing, storage, and model scalability. By leveraging decentralized data stored and processed in the cloud, FL develops robust models that not only improve detection accuracy but also generalize effectively to new data, promoting knowledge sharing while ensuring data privacy. The integration of cloud infrastructure enables FL to scale, providing resilience and productivity gains in agricultural practices. The results show that the proposed approach achieves a 99.71% accuracy using the VGG16 model after Federated Learning aggregation, while preserving data confidentiality, enhancing agricultural resilience, and benefiting from the scalability and flexibility offered by cloud computing.
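The aggregation step at the heart of this approach is FedAvg-style weighted averaging of client models. The sketch below uses flat weight lists for clarity, an assumption for illustration; a real VGG16 model would aggregate per-layer tensors the same way.

```python
# Server-side FedAvg: average client model weights, weighted by
# each client's local dataset size, so larger farms contribute
# proportionally more without ever sharing raw images.
def fedavg(client_weights: list[list[float]],
           client_sizes: list[int]) -> list[float]:
    total = sum(client_sizes)
    dims = len(client_weights[0])
    return [
        sum(w[d] * n for w, n in zip(client_weights, client_sizes)) / total
        for d in range(dims)
    ]
```

Only these aggregated weights cross the network, which is what lets the cloud-hosted server scale the collaboration while the disease images themselves stay on each client.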

  • Open Access
  • Research Article
  • Citations: 1
  • 10.5753/jisa.2025.5474
Spectrum Defragmentation Window in SDM-EON Networks
  • Aug 13, 2025
  • Journal of Internet Services and Applications
  • Paulo José De Souza Júnior + 2 more

Space division multiplexing (SDM) technology expands the capacity of elastic optical networks (EONs) by adding spatial dimensions, positioning SDM-EONs as a strong candidate for future high-throughput infrastructures. However, SDM introduces new challenges, especially vertical fragmentation, where frequency slots become misaligned across multiple cores. This fragmentation decreases spectral efficiency, reduces resource availability, and increases connection blocking. This work proposes WDefrag, a novel RMSCA algorithm that tackles these issues through Slot Window Defragmentation, an original strategy developed in this study. WDefrag segments the spectrum into cost-evaluated windows and identifies regions where fragmentation most severely limits allocation. The algorithm reallocates resources locally, avoids unnecessary disruptions, and improves spectrum organization while managing crosstalk and fragmentation in both spatial and spectral dimensions. WDefrag operates in both proactive and reactive modes and adjusts window sizes to match traffic dynamics. Simulations compare it against non-defragmenting and state-of-the-art approaches. WDefrag outperforms these baselines by up to 30% in bandwidth blocking reduction, particularly in proactive scenarios. By applying cost-aware decisions and prioritizing fragmented regions that limit connectivity, WDefrag enhances spectrum utilization and delivers consistent performance improvements under real network demands.
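The window-evaluation idea behind WDefrag can be sketched on a boolean (core x slot) occupancy grid: slide a fixed-size window over the spectrum and score each window by how heavily occupied it is across all cores. The cost function here is a deliberately simplified stand-in for the paper's cost evaluation, which also accounts for crosstalk and core misalignment.

```python
def window_cost(grid: list[list[bool]], start: int, size: int) -> int:
    """Illustrative cost: occupied slot count within the window,
    summed across all cores (True marks an occupied slot)."""
    return sum(core[s] for core in grid for s in range(start, start + size))

def most_fragmented_window(grid: list[list[bool]], size: int) -> int:
    """Start index of the highest-cost window, i.e. the region
    where local reallocation would free the most aligned capacity."""
    n_slots = len(grid[0])
    return max(range(n_slots - size + 1),
               key=lambda s: window_cost(grid, s, size))
```

Restricting defragmentation moves to the winning window is what keeps disruption local, and varying `size` corresponds to the algorithm's window-size adjustment under changing traffic.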

  • Open Access
  • Research Article
  • 10.5753/jisa.2025.5076
Performance Evaluation of a Camera Surveillance System in Smart Buildings Using Queuing Models
  • Aug 11, 2025
  • Journal of Internet Services and Applications
  • Lucas Silva Lopes + 6 more

Security is increasingly prioritized, driving the use of camera surveillance in various settings such as companies, schools, and hospitals. Cameras deter crime and enable continuous monitoring. Integrating Edge and Fog Computing into these systems decentralizes data processing, allowing for faster responses to critical events. Challenges in deploying such systems include high costs, complex technology integration, and precise sizing. Costs cover cameras, Edge devices, cabling, and software, while integration requires technical expertise and time. Accurate sizing is essential to prevent resource under- or over-utilization. Analytical modeling helps simulate scenarios and calculate needed resources. This work proposes an M/M/c/K queuing model to assess surveillance system performance in smart buildings, considering data arrival rates and Edge and Fog container capacities. The model allows parameter customization to analyze various scenarios. Results show that increasing the number of containers more significantly improves system performance than increasing the number of cores per container.
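The M/M/c/K model used above has a closed-form steady-state solution, sketched here with the standard textbook formulas: `c` parallel containers, system capacity `K`, arrival rate `lam`, and per-container service rate `mu` (parameter values in the test are examples, not the paper's measured rates).

```python
import math

def mmck_probs(lam: float, mu: float, c: int, K: int) -> list[float]:
    """Steady-state probabilities p_0..p_K of an M/M/c/K queue."""
    a = lam / mu                     # offered load in Erlangs
    unnorm = []
    for n in range(K + 1):
        if n <= c:
            unnorm.append(a ** n / math.factorial(n))
        else:
            # beyond c customers, all c servers are busy
            unnorm.append(a ** n / (math.factorial(c) * c ** (n - c)))
    p0 = 1.0 / sum(unnorm)
    return [p0 * u for u in unnorm]

def blocking_probability(lam: float, mu: float, c: int, K: int) -> float:
    """Probability an arriving request finds the system full (p_K)."""
    return mmck_probs(lam, mu, c, K)[-1]
```

Sweeping `c` (containers) versus `mu` (cores per container) in this model is exactly the comparison the paper performs: adding containers grows the service capacity multiplicatively, which is why it improves performance more than speeding up each container.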