
Related Topics

  • Hadoop Distributed File System
  • Parallel File System
  • Cluster File System
  • Distributed File
  • Parallel File
  • Cluster File

Articles published on Distributed File System

1172 Search results
  • Research Article
  • 10.54580/r0702.09
Comparative performance analysis of the NFS and SMB/CIFS file systems integrated with Windows Server DFS
  • Nov 25, 2025
  • Revista Angolana de Ciencias
  • Adilson José Da Silva Silvério + 1 more

Distributed file systems (DFS) are technological resources that enable secure and efficient information sharing and access within a network infrastructure. Since the adoption of these types of systems is vital for an institution, and considering the importance of choosing the right protocol for infrastructure reliability and performance, this study comparatively analyzes the operation and performance of the Network File System (NFS) and Server Message Block (SMB)/Common Internet File System (CIFS) file systems integrated with the Windows Server DFS service, examining variables such as transfer rate, CPU (Central Processing Unit) utilization, and threads under different workloads. The test environment was designed based on the network infrastructure of Katyavala Bwila University, located in Benguela province, Angola, which uses a client-server architecture. Based on the results presented, the system's responsiveness was determined for different operations (write, rewrite, read, reread, and others) on files and records of different sizes. Similar performance was observed between the protocols in intensive load tests, with minor variations in throughput for specific operations. These findings provide relevant guidance for network administrators, experts, and the scientific community in defining file sharing policies and choosing the most appropriate protocol for deployments in DFS environments.

  • Research Article
  • 10.4108/eetsis.9027
The Cutting-Edge Hadoop Distributed File System: Unleashing Optimal Performance
  • Oct 13, 2025
  • ICST Transactions on Scalable Information Systems
  • Anish Gupta + 5 more

Despite the widespread adoption of 1000-node Hadoop clusters by the end of 2022, Hadoop implementation still encounters various challenges. As a vital software paradigm for managing big data, Hadoop relies on the Hadoop Distributed File System (HDFS), a distributed file system designed to handle data replication for fault tolerance. This technique involves duplicating data across multiple DataNodes (DN) to ensure data reliability and availability. While data replication is effective, it suffers from inefficiencies due to its reliance on a single-pipelined paradigm, leading to time wastage. To tackle this limitation and optimize HDFS performance, a novel approach is proposed, utilizing multiple pipelines for data block transfers instead of a single pipeline. Additionally, the proposed approach incorporates dynamic reliability evaluation, wherein each DN updates its reliability value after each round and sends this information to the NameNode (NN). The NN then sorts the DN based on their reliability values. When a client requests to upload a data block, the NN responds with a list of high-reliability DN, ensuring high-performance data transfer. This proposed approach has been fully implemented and tested through rigorous experiments. The results reveal significant improvements in HDFS write operations, providing a promising solution to overcome the challenges associated with traditional HDFS implementations. By leveraging multiple pipelines and dynamic reliability assessment, this approach enhances the overall performance and responsiveness of Hadoop's distributed file system.
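The NameNode-side selection described above can be sketched as follows. This is only an illustrative shape of the idea, assuming an exponentially weighted reliability update and top-k selection; the paper's actual update rule and names are not given in the abstract.

```python
# Sketch of reliability-ranked DataNode selection (illustrative assumptions,
# not the paper's implementation).

def update_reliability(prev: float, success: bool, alpha: float = 0.3) -> float:
    """Exponentially weighted update a DataNode might apply after each round."""
    return (1 - alpha) * prev + alpha * (1.0 if success else 0.0)

def select_pipeline_targets(reliability: dict[str, float], replicas: int = 3) -> list[str]:
    """NameNode-side: sort DataNodes by reliability, return the top candidates."""
    ranked = sorted(reliability, key=reliability.get, reverse=True)
    return ranked[:replicas]

dns = {"dn1": 0.92, "dn2": 0.41, "dn3": 0.88, "dn4": 0.77}
print(select_pipeline_targets(dns))  # highest-reliability nodes first
```

With the ranking maintained at the NN, each client upload can be answered from the sorted list without re-polling every DN.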

  • Research Article
  • 10.12732/ijam.v38i5s.302
DEVELOP AND ANALYZE DYNAMIC HEALTH-BASED ELECTION ALGORITHM FOR ROLE PROMOTION IN HADOOP HDFS
  • Oct 8, 2025
  • International Journal of Applied Mathematics
  • Shailesh Hule

The Hadoop Distributed File System (HDFS) is a key part of scalable data storage. However, standard methods for assigning Active and Standby NameNode roles often rely on static configurations or close manual supervision, which makes fault recovery and load balancing harder. This study addresses these shortcomings by developing and analyzing a dynamic, health-based election algorithm for intelligent role promotion in Hadoop HDFS. The objective is a system that can adapt and select the best node for promotion based on the health of its resources at any given time. Using normalized values of CPU usage, RAM availability, network bandwidth, and uptime, the proposed method computes a composite health score, and each node is then ranked by its final score. The findings demonstrate that DataNode DN1 achieved the best score of 0.7655, outperforming DN2 (0.1189) and DN3 (0.6), and was therefore promoted to Active NameNode. This shows that the algorithm can choose the most stable node on the fly, which lessens the effects of a split and makes the system more resilient. The results show that this health-based election method greatly enhances fault tolerance and cluster performance by ensuring that key roles are reassigned as node conditions change.
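The composite score can be sketched as below. Equal weights and the [0, 1] normalization are assumptions for illustration; the abstract does not state the paper's exact weighting.

```python
# Illustrative composite health score for role promotion (weights assumed).

def health_score(cpu_free: float, ram_free: float, bandwidth: float, uptime: float,
                 weights=(0.25, 0.25, 0.25, 0.25)) -> float:
    """All inputs normalized to [0, 1]; higher means healthier."""
    metrics = (cpu_free, ram_free, bandwidth, uptime)
    return sum(w * m for w, m in zip(weights, metrics))

# Hypothetical metric readings for three DataNodes:
nodes = {
    "DN1": health_score(0.9, 0.8, 0.7, 0.95),
    "DN2": health_score(0.2, 0.1, 0.1, 0.30),
    "DN3": health_score(0.6, 0.6, 0.6, 0.60),
}
active = max(nodes, key=nodes.get)  # node elected Active NameNode
```

Ranking nodes by this single score is what lets the election react automatically as any one resource degrades.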

  • Research Article
  • 10.4018/ijisscm.389716
Enhancing CAD Data Integrity and Security in Supply Chain Networks Using Blockchain
  • Sep 26, 2025
  • International Journal of Information Systems and Supply Chain Management
  • Chengnan Li + 4 more

Ensuring the integrity and security of CAD design data is critical in digital supply chains, where centralized systems face risks of tampering, unauthorized access, and transmission vulnerabilities. This study proposes a blockchain-based framework to enhance data integrity, traceability, and network security in CAD environments. XML is used to standardize design data, which is encrypted and stored in a decentralized manner using blockchain and distributed file systems, while smart contracts enforce access control and validation. Experimental results show that the approach effectively prevents data tampering, ensures secure information sharing, and improves transmission efficiency—making it a promising solution for secure collaboration across supply chain ecosystems. The framework supports version control and auditability of design changes, enhancing transparency among distributed engineering teams. By integrating blockchain with CAD workflows, the system strengthens trust and data reliability in digital product development within complex supply networks.
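The tamper-evidence property above rests on hash chaining: each block commits to the digest of an XML design payload and to the previous block's hash. The following is a minimal stdlib sketch of that mechanism only, not the paper's full blockchain framework (smart contracts and distributed storage are out of scope here).

```python
# Minimal hash-chain sketch: altering any payload changes every later hash.
import hashlib

def block_hash(prev_hash: str, payload: bytes) -> str:
    """Link the SHA-256 of this payload to the previous block's hash."""
    return hashlib.sha256(prev_hash.encode() + hashlib.sha256(payload).digest()).hexdigest()

def build_chain(payloads: list[bytes]) -> list[str]:
    chain, prev = [], "0" * 64  # genesis value, assumed
    for p in payloads:
        prev = block_hash(prev, p)
        chain.append(prev)
    return chain

designs = [b"<part id='1'/>", b"<part id='2'/>"]
chain = build_chain(designs)
# Tampering with the first payload invalidates the second block's hash too:
assert build_chain([b"<part id='1' rev='x'/>", b"<part id='2'/>"])[1] != chain[1]
```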

  • Research Article
  • 10.14419/m46fn971
Hybrid Encryption for Fortifying HDFS Data
  • Sep 14, 2025
  • International Journal of Basic and Applied Sciences
  • Shivani Awasthi + 1 more

In the big data era, standard encryption methods alone are not suitable for handling massive, high-velocity data, which negatively impacts the performance of a distributed framework. This paper proposes a hybrid encryption (HE) method that integrates the strengths of two symmetric algorithms (Twofish-256, AES-256) with the Hadoop Map-Reduce framework (MRF) to fortify Hadoop Distributed File System (HDFS) data. The paper offers dual-level encryption (Twofish -> AES) to mitigate the vulnerabilities of standalone encryption while maintaining optimal performance. Experiments on datasets from 32-256 MB show an encryption speed improvement of over 5-6%, an efficiency gain of over 5%, and a throughput gain of over 6% compared to hybrid approaches such as CP-ABE+AES and AES+RSA and the standalone encryption schemes AES and Twofish. Additionally, ANOVA tests on encryption and decryption time give (F = 2.67, p = 0.07) and (F = 9.9, p = 0.0003), which show that the proposed HE approach is highly significant in big data environments. The approach balances security and performance, addresses the weaknesses of individual and hybrid encryption algorithms, ensures compatibility in distributed environments, and complies with data protection regulations. The proposed HE approach (Twofish -> AES) complies with GDPR, HIPAA, and PCI-DSS through key management and resistance to side-channel attacks. The results show feasibility in the government and healthcare sectors, where data protection and large dataset processing are critical.
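The dual-level layering pattern (inner cipher, then outer cipher, decrypted in reverse order) can be sketched as below. Python's standard library ships neither Twofish nor AES, so each layer here is a toy SHA-256 counter-mode keystream cipher standing in for the real primitives; only the two-layer structure is the point, not the cipher itself.

```python
# Two-layer encryption pattern; the stream cipher is a TOY stand-in for
# Twofish-256 / AES-256, for illustration only.
import hashlib

def stream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data against a SHA-256 counter-mode keystream (symmetric)."""
    out = bytearray()
    for block in range((len(data) + 31) // 32):
        ks = hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        chunk = data[block * 32:(block + 1) * 32]
        out.extend(b ^ k for b, k in zip(chunk, ks))
    return bytes(out)

def hybrid_encrypt(k_inner: bytes, k_outer: bytes, plaintext: bytes) -> bytes:
    # Level 1 (Twofish-256 stand-in), then level 2 (AES-256 stand-in).
    return stream_xor(k_outer, stream_xor(k_inner, plaintext))

def hybrid_decrypt(k_inner: bytes, k_outer: bytes, ciphertext: bytes) -> bytes:
    # Peel the layers in reverse order.
    return stream_xor(k_inner, stream_xor(k_outer, ciphertext))

msg = b"hdfs block payload"
ct = hybrid_encrypt(b"K1", b"K2", msg)
assert hybrid_decrypt(b"K1", b"K2", ct) == msg
```

In an HDFS deployment the two keys would come from a key-management service, and the decrypt step would run inside the map task as the paper describes.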

  • Research Article
  • 10.1145/3760403
Achieving Both Performance and Reliability in An Asymmetric File System on Disaggregated Persistent Memory
  • Aug 13, 2025
  • ACM Transactions on Storage
  • Miao Cai + 2 more

The ultra-fast persistent memories (PMs) promise a practical solution towards high-performance distributed file systems. This paper examines and reveals a cascade of performance and reliability issues in the current PM provision scheme, which not only underutilizes fast PM devices but also leads to severe consequences, such as throughput degradation, load imbalance, and even service outage. To remedy these, we introduce Ethane+, a rack-scale, distributed file system built on disaggregated persistent memory (DPM). Through resource separation using fast data connection technologies, DPM achieves efficient and cost-effective PM sharing while supporting strong fault isolation. To unleash such hardware potentials, Ethane+ incorporates an asymmetric file system architecture inspired by the imbalanced resource provision feature of DPM. It splits a file system into a control-plane FS and a data-plane FS, and designs these two planes with dual goals of best hardware utilization and hardening file system reliability. Evaluation results demonstrate that Ethane+ reaps the DPM hardware benefits, performs up to 60 × better than modern distributed file systems, resists both software and hardware faults, and improves data-intensive application throughputs by up to 15 ×.

  • Research Article
  • 10.1145/3759441.3759451
Erasure Coding Aware Block Placement for Data-Intensive Applications
  • Aug 4, 2025
  • ACM SIGOPS Operating Systems Review
  • Shadi Ibrahim + 1 more

Erasure Coding (EC) has recently been integrated and deployed in the Hadoop Distributed File System (HDFS) to provide the same fault tolerance guarantees as replication, but with significantly less storage overhead. When EC is used, data reads typically involve only data chunks. In this paper, we study the effect of data chunk distribution on the performance of reads and data-intensive applications, and present the design and evaluation of an erasure coding aware (EC-aware) block placement that balances the distribution of data chunks across nodes. Experimental results show that EC-aware block placement can reduce the execution time of Sort and WordCount applications by up to 25%.
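Since reads under EC touch only data chunks, the balancing idea above amounts to placing each stripe's data chunks on the nodes currently holding the fewest data chunks. The greedy policy and names below are illustrative assumptions, not the paper's exact placement algorithm.

```python
# EC-aware placement sketch: data chunks go to the least-loaded nodes;
# parity chunks follow but do not count toward read load.

def place_stripe(load: dict[str, int], n_data: int, n_parity: int) -> list[str]:
    """Return target nodes for one stripe, least-loaded first."""
    ranked = sorted(load, key=lambda n: load[n])
    targets = ranked[:n_data + n_parity]
    for node in targets[:n_data]:      # only data chunks add read load
        load[node] += 1
    return targets

load = {"n1": 4, "n2": 0, "n3": 2, "n4": 1, "n5": 3}
stripe = place_stripe(load, n_data=3, n_parity=1)  # RS(3,1) layout, assumed
```

Repeating this per stripe keeps the data-chunk count even across nodes, which is what reduces read stragglers for Sort- and WordCount-style jobs.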

  • Research Article
  • Cited by 1
  • 10.1016/j.future.2025.107763
ZCeph: Design and implementation of a ZNS-friendly distributed file system
  • Aug 1, 2025
  • Future Generation Computer Systems
  • Jin Yong Ha + 1 more


  • Research Article
  • 10.11591/eei.v14i4.9292
Optimizing cloud infrastructure efficiency through advanced multimedia data deduplication techniques
  • Aug 1, 2025
  • Bulletin of Electrical Engineering and Informatics
  • Mohd Hasan Mohiuddin + 1 more

Organizations worldwide commonly utilize cloud infrastructure to manage large volumes of data, making the optimization of storage crucial for enhancing cloud performance. One effective optimization technique is data deduplication, which identifies duplicate objects and ensures that only one copy of unique data is stored in the cloud. While several deduplication schemes currently exist, there is a pressing need to improve efficiency in cloud storage through innovative approaches. In this paper, we propose a new system model designed to facilitate an efficient deduplication process. Our algorithm, called deduplication in cloud infrastructure (DCI), offers a systematic and effective method for handling deduplication challenges related to redundant data storage. DCI focuses on hash generation, metadata comparison, and pointer-based deduplication, providing a comprehensive strategy for optimizing cloud storage resources and minimizing duplication. This ultimately enhances both the efficiency and cost-effectiveness of cloud-based data management. A simulation study using CloudSim and the Hadoop distributed file system (HDFS) simulator demonstrates that the proposed deduplication method is effective. Experimental results show that our algorithm outperforms many existing solutions, achieving the highest deduplication ratio of 6.7 and saving 85.09% of storage space due to its efficient deduplication approach. The proposed system can be used in cloud infrastructures for efficiency.
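The three DCI steps named above (hash generation, metadata comparison, pointer-based deduplication) can be sketched as follows; the store layout and class names are illustrative assumptions, not the paper's implementation.

```python
# Hash-then-pointer deduplication sketch.
import hashlib

class DedupStore:
    def __init__(self):
        self.blocks: dict[str, bytes] = {}   # content hash -> unique copy
        self.pointers: dict[str, str] = {}   # object name  -> content hash

    def put(self, name: str, data: bytes) -> bool:
        """Store an object; return True if it was a duplicate."""
        digest = hashlib.sha256(data).hexdigest()  # hash generation
        duplicate = digest in self.blocks          # metadata comparison
        if not duplicate:
            self.blocks[digest] = data             # keep one unique copy
        self.pointers[name] = digest               # pointer-based dedup
        return duplicate

    def dedup_ratio(self) -> float:
        return len(self.pointers) / max(len(self.blocks), 1)

store = DedupStore()
for name, data in [("a.bin", b"xyz"), ("b.bin", b"xyz"), ("c.bin", b"pqr")]:
    store.put(name, data)
print(store.dedup_ratio())  # -> 1.5 (3 objects over 2 unique blocks)
```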

  • Research Article
  • 10.30857/2786-5371.2025.2.2
Impact of log file processing on learning speed and defect classification accuracy
  • Jul 31, 2025
  • Technologies and Engineering
  • Anton Kaiafiuk

The purpose of the study was to investigate the effect of preprocessing automatic-testing log files on the speed of vectorisation and training of machine learning models. The HDFS_v3_TraceBench set was used, which contains more than 370 thousand traces collected in a Hadoop Distributed File System environment. Processing included noise removal, lemmatisation, and duplication reduction. The data was vectorised using the term frequency - inverse document frequency method, and then a RandomForestClassifier model was trained. The experimental results showed that optimising the input data reduced the total processing time by almost five times. The time required for text vectorisation and model training was reduced, which helped to speed up work with large volumes of logs. The classification accuracy was not only preserved but slightly improved: the F1-score and Matthews correlation coefficient remained consistently high. There was also a decrease in the Log Loss value, indicating an increase in the model's confidence in its own predictions. This is especially important in the context of unbalanced classes, which are characteristic of defect classification problems. A detailed analysis showed that a significant part of the service and repetitive information in the logs is not critical for training the model, and its removal, on the contrary, improves the quality of data preparation. The study also confirmed that the resulting target labels for logs correspond to typical error classes. The implemented log file processing not only reduces computational costs but also maintains or improves forecasting quality. These results confirmed the feasibility of including a log cleaning and optimisation step in the overall process of building machine learning models for automated testing. The results can be integrated into automated pipelines for classifying defects and generating bug reports, which will help reduce manual labour and increase team efficiency.
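The noise-removal and duplicate-reduction steps named above can be sketched with the standard library alone (lemmatisation and TF-IDF would follow with an NLP/ML library). The noise patterns below are illustrative assumptions, not the study's actual rules.

```python
# Log preprocessing sketch: strip volatile tokens, then deduplicate templates.
import re

# Assumed noise: timestamps, millisecond times, HDFS block IDs, and paths/IPs.
NOISE = re.compile(r"\d{4}-\d{2}-\d{2}|\d{2}:\d{2}:\d{2},\d+|blk_-?\d+|/[\w./-]+")

def clean_line(line: str) -> str:
    """Remove noise tokens, collapse whitespace, lowercase."""
    return re.sub(r"\s+", " ", NOISE.sub(" ", line)).strip().lower()

def preprocess(lines: list[str]) -> list[str]:
    seen, out = set(), []
    for line in lines:
        cleaned = clean_line(line)
        if cleaned and cleaned not in seen:   # duplicate reduction
            seen.add(cleaned)
            out.append(cleaned)
    return out

logs = [
    "2024-01-01 10:00:01,123 INFO Received block blk_4521 from /10.0.0.5",
    "2024-01-01 10:00:02,456 INFO Received block blk_9987 from /10.0.0.7",
]
print(preprocess(logs))  # both lines collapse to one deduplicated template
```

Collapsing volatile tokens is what shrinks the vocabulary before vectorisation, which is where the reported speedup comes from.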

  • Research Article
  • 10.7494/csci.2025.26.si.7071
DEVELOPING ARTIFICIAL INTELLIGENCE IN THE CLOUD: THE AI INFN PLATFORM
  • Jul 29, 2025
  • Computer Science
  • Rosa Petrini

The INFN CSN5-funded project AI INFN (“Artificial Intelligence at INFN”) aims to promote ML and AI adoption within INFN by providing comprehensive support, including state of-the-art hardware and cloud-native solutions within INFN Cloud. This facilitates efficient sharing of hardware accelerators without hindering the institute’s diverse research activities. AI INFN advances from a Virtual-Machine-based model to a flexible Kubernetes-based platform, offering features such as JWT-based authentication, JupyterHub multitenant interface, distributed file system, customizable conda environments, and specialized monitoring and accounting systems. It also enables virtual nodes in the cluster, offloading computing payloads to remote resources through the Virtual Kubelet technology, with InterLink as provider. This setup can manage workflows across various providers and hardware types, which is crucial for scientific use cases that require dedicated infrastructures for different parts of the workload. Results of initial tests to validate its production applicability, emerging case studies and integration scenarios are presented.

  • Research Article
  • 10.4114/intartif.vol28iss76pp124-148
Distributed two phase intrusion detection system using machine learning techniques and underlying big data storage and processing architecture - HDFS
  • Jul 10, 2025
  • Inteligencia Artificial
  • Abhijit Dnyaneshwar Jadhav + 4 more

It is crucial for organizations to secure their data in the internet era, and Intrusion Detection Systems (IDS) provide this security. Several researchers have used various tools and methods to implement IDS models; however, a few performance concerns remain that are crucial from a security standpoint. These concern IDS time efficiency (timeliness), accuracy, and fault tolerance. The proposed intrusion detection model has two phases of detection, each using a different set of machine learning algorithms. Phase I employs Support Vector Machine (SVM) and k-nearest neighbor (kNN), whereas Phase II uses Decision Tree and Naïve Bayes. This two-phase detection reduces both false positives and false negatives. To offset the execution time of these four techniques, the big data environment, the Hadoop Distributed File System (HDFS), is utilized as the underlying storage and processing structure. With this arrangement, the model achieves an overall accuracy of 97.29% for known and unknown attacks: 99.49% for known attacks and 96.28% for unknown attacks. Time efficiency was also measured for training and testing; training with 10,000 records took 0.7 seconds, which is very efficient compared to existing systems. The detailed performance achievements are discussed in the results section. Moreover, because of HDFS, the result is a distributed and fault-tolerant intrusion detection system.
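The two-phase control flow can be sketched as below. The four classifiers are stand-in rule functions (the paper trains SVM, kNN, Decision Tree, and Naïve Bayes on real traffic); only the cascade, where Phase II confirms what Phase I flags, is shown, and all thresholds and feature names are assumptions.

```python
# Two-phase detection cascade sketch with stand-in classifiers.

def phase_one(x: dict) -> bool:
    # Stand-ins for SVM and kNN: flag if either looks suspicious.
    svm_like = x["bytes"] > 10_000
    knn_like = x["failed_logins"] >= 3
    return svm_like or knn_like        # high recall, may raise false positives

def phase_two(x: dict) -> bool:
    # Stand-ins for Decision Tree and Naive Bayes: confirm only if both agree.
    dt_like = x["failed_logins"] >= 3 and x["duration"] < 2.0
    nb_like = x["bytes"] > 10_000
    return dt_like and nb_like         # prunes Phase I false positives

def detect(x: dict) -> bool:
    return phase_one(x) and phase_two(x)

benign = {"bytes": 500, "failed_logins": 0, "duration": 12.0}
attack = {"bytes": 50_000, "failed_logins": 5, "duration": 0.4}
assert not detect(benign) and detect(attack)
```

Running Phase II only on Phase I's positives is also what bounds the extra cost of the second ensemble.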

  • Research Article
  • 10.25045/jpit.v16.i2.05
OPTIMIZATION OF ACCESS TO STATIC DATA IN DISTRIBUTED SYSTEMS: A KUBERNETES-BASED SOLUTION WITH POSTGRESQL AND DJANGO
  • Jun 30, 2025
  • Problems of Information Technology
  • Nail Mammadov + 2 more

Static data is a crucial component in distributed systems, ensuring the seamless operation of various components. However, achieving reliable and high-frequency access to such data poses challenges due to its heterogeneous structure. This paper introduces a Kubernetes-orchestrated Static Data Database that integrates advanced technologies and best practices to address these challenges effectively. The proposed system leverages the Django framework and PostgreSQL database, optimized with advanced features such as JSONB indexing and metadata flexibility. These technologies are not only widely adopted in the industry for their proven scalability and efficiency but also incorporate cutting-edge academic innovations, such as JSONB-based metadata management and modular database schemas, to enhance flexibility in diverse use cases. Data payloads are stored in a POSIX-compliant distributed file system to ensure robustness. The service is containerized and deployed using Helm on the Kubernetes platform, with OKD serving as the deployment environment to achieve scalability and operational efficiency. This deployment model reflects industrial standards for cloud-native applications while demonstrating the practical applicability of academic research on container orchestration and resource optimization. Through extensive testing, the system demonstrated significant scalability, reliability, and performance improvements under high-demand scenarios. The testing process was designed to mirror real-world industrial workloads, such as high-frequency data access and concurrent queries, while validating academic hypotheses on system behavior under extreme conditions. The results validate its potential to meet the growing needs of modern distributed systems, offering a scalable and future-ready solution firmly grounded in academic research and industrial practice.

  • Research Article
  • 10.33093/jiwe.2025.4.2.21
Exploring Big Data Management Approaches and Applications: A Case Study of Real-Time Data Analytics in Air Traffic Management
  • Jun 14, 2025
  • Journal of Informatics and Web Engineering
  • Adeel Hashmi + 4 more

The rapid proliferation of digital devices has generated vast amounts of data, presenting significant challenges in collection, processing, and analysis that traditional systems struggle to overcome. This study investigates big data management approaches, explicitly focusing on technologies capable of efficiently handling real-time data at scale. Within the context of Air Operations, we propose a Hadoop-based architecture designed to support the Observe-Orient-Decide-Act (OODA) loop and enhance air traffic management. By leveraging a distributed system deployed on a cloud-based platform, we demonstrate a cost-effective solution for optimised data processing and improved decision-making capabilities. Our analysis highlights the advantages of using Hadoop's distributed file system (HDFS) for managing both structured and unstructured data generated by various sensors and devices. Additionally, we explore the integration of real-time processing technologies, such as Apache Kafka and Spark, to facilitate timely insights essential for operational effectiveness. Cloud deployment not only enhances resource accessibility but also offers flexibility and scalability, which are crucial for adapting to the dynamic nature of defence operations. We also address critical considerations for security and compliance when handling sensitive military data in cloud environments and recommend strategies to mitigate potential risks. The study concludes with recommendations for addressing future technological needs in big data management, including the incorporation of machine learning for predictive analytics and improved data visualisation tools. By implementing our proposed architecture, military/civil aviation can enhance its operational efficiency and decision-making processes, positioning itself to meet future challenges in an increasingly data-driven environment.

  • Research Article
  • 10.11591/ijece.v15i3.pp3439-3448
An intelligent approach to design big data on e-commerce in cloud computing environment
  • Jun 1, 2025
  • International Journal of Electrical and Computer Engineering (IJECE)
  • Salma Syed + 7 more

Web mining extracts useful knowledge from web resources. Web servers maintain log files, and analyzing them to understand customer behavior and improve business is a challenging task for e-commerce companies. The demand for big data processing and computing grows day by day with the capability of computer systems, and the rapid development of information technology has gradually increased the emphasis on data. Various businesses are exploring effective data analysis methods, and this system proposes an intelligent approach to designing big data for e-commerce in a cloud computing environment. This paper aims to develop and implement the relevancy vector (RV) algorithm, an innovative page ranking algorithm based on Hadoop distributed file system (HDFS) MapReduce. The research provides customers with a robust meta search tool that makes it easy for them to state personalized search requirements and make purchases based on their preferences. A thorough experimental evaluation showed the intelligent meta search system adverse events (IMSS-AE) tool and the RV page ranking algorithm to be efficient and effective in terms of reduced response time, enhanced page freshness, high personalized relevance, and high hit rates.
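The RV algorithm itself is not specified in the abstract; the sketch below only shows the MapReduce shape over which such a page-ranking score could run, here counting query-term hits per page from log records. All record fields and names are assumptions.

```python
# Generic map/reduce shape for a log-driven page relevancy count.
from collections import defaultdict
from itertools import chain

def mapper(record: dict) -> list:
    # Emit (page, 1) for each query term found in the page's content.
    return [(record["page"], 1) for term in record["query"].split()
            if term in record["content"]]

def reducer(pairs) -> dict:
    scores = defaultdict(int)
    for page, count in pairs:
        scores[page] += count
    return dict(scores)

logs = [
    {"page": "p1", "query": "cheap phone", "content": "cheap phone deals"},
    {"page": "p2", "query": "cheap phone", "content": "laptop deals"},
    {"page": "p1", "query": "phone case", "content": "cheap phone deals"},
]
scores = reducer(chain.from_iterable(mapper(r) for r in logs))
```

On HDFS the mapper would run per log split and the reducer per page key; locally the same functions compose with a flat chain as above.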

  • Research Article
  • 10.36548/rrrj.2025.1.009
Distributed Resource Management in Operating Systems: A Case Study on HDFS and YARN
  • Jun 1, 2025
  • Recent Research Reviews Journal
  • Sivaprasath R + 4 more

This research study focuses on analysing the role of distributed resource management in enhancing the scalability and reliability of linked systems. It presents a detailed analysis of the architectures, benefits, and inherent drawbacks of the Hadoop Distributed File System (HDFS) and Yet Another Resource Negotiator (YARN). YARN offers flexible resource scheduling through Fair and Capacity schedulers, while HDFS offers fault-tolerant, scalable storage through a block-based, replicated, and locality-optimized design. Although robust, limitations like resource contention in YARN and the NameNode's single point of failure in HDFS still exist. In order to address the evolving challenges in modern computing, this study also explores potential research domains like serverless architecture for dynamic scaling, latency-conscious edge computing, and AI-based resource forecasting.

  • Research Article
  • 10.32347/st.2025.3.1203
Organization and storage of large datasets for AI training
  • May 14, 2025
  • Smart technologies: Industrial and Civil Engineering
  • Oleksii Melnikov

The article examines current approaches to organizing and storing large datasets used for training artificial intelligence (AI) models, particularly for radio signal recognition tasks. It emphasizes the need for reliable and efficient data storage solutions due to the increasing scale and complexity of data involved in AI applications. The study analyzes various data storage formats (HDF5, WAV, IQ raw data), provides an overview of cloud-based solutions (AWS, Google Cloud, Azure), local storage systems (SAN, NAS, JBOD), and distributed file systems. It discusses best practices in data versioning and cataloging for enhanced accessibility, performance, and reproducibility of AI training processes. Recommendations are provided for selecting optimal storage methods based on the specific requirements of AI projects and the characteristics of the processed data.

  • Research Article
  • 10.11591/ijeecs.v38.i2.pp1256-1264
Enhance big data security based on HDFS using the hybrid approach
  • May 1, 2025
  • Indonesian Journal of Electrical Engineering and Computer Science
  • Fayçal Zine-Dine + 3 more

Hadoop has emerged as a prominent open-source framework for the storage, management, and processing of extensive big data through its distributed file system, known as Hadoop distributed file system (HDFS). This widespread adoption can be attributed to its capacity to provide reliable, scalable, and cost-effective solutions for managing large datasets across diverse sectors, including finance, healthcare, and social media. Nevertheless, as the significance and scale of big data applications continue to expand, the challenge of ensuring the security and safeguarding of sensitive data within Hadoop has become increasingly critical. In this study, the authors introduce a novel strategy aimed at bolstering data security within the Hadoop storage framework. This approach specifically employs a hybrid encryption technique that leverages the advantages of both advanced encryption standard (AES) and data encryption standard (DES) algorithms, whereby files are encrypted in HDFS and subsequently decrypted during the map task. To assess the efficacy of this method, the authors performed experiments with various file sizes, benchmarking the outcomes against other established security measures.

  • Research Article
  • Cited by 1
  • 10.1108/techs-08-2024-0114
Efficient small file management in Hadoop distributed file system for enhanced e-government services
  • Apr 8, 2025
  • Technological Sustainability
  • Fredrick Ishengoma

Purpose: This paper introduces the Efficient Small File Management Algorithm (ESFMA) to overcome the small-file inefficiency of the Hadoop distributed file system (HDFS) for e-government services. Design/methodology/approach: ESFMA is designed with the following features: hierarchical metadata architecture, caching, block aggregation, prefetching, and locality-aware data placement. These are intended to optimize NameNode memory usage, metadata handling, data block management, I/O, and network performance. The algorithm was implemented in experiments on HDFS with real e-government small files. Findings: The experiments showed that ESFMA saves 10% of NameNode memory, 12% of metadata requests, 3.8% of data block use, 15% of read latency, 17% of write latency, and 10% of network traffic. Practical implications: This study suggests that implementing ESFMA has the potential to enable e-government services on HDFS to run efficiently and effectively. Originality/value: This paper presents an algorithm for small file management in HDFS, filling an important need in improving service efficiency and performance in e-government services.
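The block-aggregation feature named above packs many small files into one HDFS-sized block with an index, so the NameNode tracks one block instead of hundreds of tiny ones. The sketch below illustrates that layout only; names and the index format are assumptions, not ESFMA's implementation.

```python
# Small-file aggregation sketch: pack files into one block, index by offset.

BLOCK_SIZE = 128 * 1024 * 1024  # default HDFS block size

def aggregate(files: dict, block_size: int = BLOCK_SIZE):
    """Return (packed_block, index) where index maps name -> (offset, length)."""
    block, index, offset = bytearray(), {}, 0
    for name, data in files.items():
        if offset + len(data) > block_size:
            raise ValueError("block full; start a new aggregate block")
        index[name] = (offset, len(data))
        block.extend(data)
        offset += len(data)
    return bytes(block), index

def read_small_file(block: bytes, index: dict, name: str) -> bytes:
    """Serve one small file from the packed block without touching the rest."""
    off, length = index[name]
    return block[off:off + length]

block, idx = aggregate({"form_a.pdf": b"%PDF-a", "form_b.pdf": b"%PDF-b"})
assert read_small_file(block, idx, "form_b.pdf") == b"%PDF-b"
```

Keeping the index hot in a cache is what reduces per-file metadata requests at the NameNode.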

  • Open Access
  • Research Article
  • 10.31803/tg-20240111081254
Survey Based on Edge Structured File Systems in Edge Computing
  • Apr 7, 2025
  • Tehnički glasnik
  • Ravula Rajesh + 1 more

Edge Structured File systems (ESFs) are distributed file systems designed to provide efficient and reliable storage solutions in edge computing environments. By employing distributed and decentralized designs, these file systems address specific challenges such as limited resources, inconsistent connectivity, and fluctuating network conditions, resulting in quicker access times, reduced latency, and enhanced resilience. ESFs adapt to dynamic edge settings through flexible data placement tactics, network congestion detection and resolution, and seamless integration with cloud-based storage systems. These techniques enable data portability and, when necessary, the outsourcing of computation-intensive operations to the cloud. Overall, edge-computing ecosystems rely on ESFs to deliver optimal performance, resilience, and data availability. The article discusses several studies on edge-structured file systems, highlighting their features and limitations of previous works. Furthermore, it identifies requirements and discusses research challenges in edge computing, laying the groundwork for future advancements in this rapidly evolving field. By providing insights into the state-of-the-art technologies, features, and limitations of ESFs, as well as their broader implications for edge computing, this article aims to offer valuable guidance to researchers, practitioners, and stakeholders interested in harnessing the full potential of edge computing technologies.


Copyright 2025 Cactus Communications. All rights reserved.