  • New
  • Research Article
  • 10.20998/2522-9052.2026.1.08
RESEARCH OF THE EFFICIENCY MULTISERVICE NETWORKS USING MIMO TECHNOLOGY
  • Jan 26, 2026
  • Advanced Information Systems
  • Elshan Hashimov + 4 more

The presented research addresses the problem of increasing the efficiency of transmission and the noise immunity of reception of discrete messages used for the exchange of traffic flows between communication systems and radio engineering complexes. The object of the study is the hardware and software systems and radio channels of multiservice communication networks that use multi-antenna technologies. Multi-antenna systems in multiservice communication networks increase the capacity of radio channels by transmitting a signal from several antennas on the transmitter side to several antennas on the receiver side. It is worth noting that the capacity of the radio channel remains limited by the power distribution algorithm in use. The efficiency and noise immunity indicators of communication systems operating in the presence of interference sources are analyzed on the basis of the architectural concept of next-generation and future public communication networks. The subject area is the application of a new approach in multiservice communication networks for the optimal use of the resources of end-to-end digital technology and modern wireless cellular communication technologies. The purpose of the study is to develop a new approach to constructing a method for evaluating the characteristics of transmission efficiency and noise immunity when receiving traffic flow messages in a complex signal-noise environment. Based on methods for evaluating the performance indicators of multiservice communication networks, analytical expressions important for further research were obtained. The main conclusions of the study can be implemented in multiservice stationary and wireless cellular networks to calculate transmission efficiency and reception noise immunity indicators.
The technical and economic effect for multiservice networks and radio engineering complexes consists in increasing their throughput by drawing on the capabilities and resources of modern cellular mobile network technologies. The proposed main stages of the study are substantiated, and the results of the analytical study and simulation modeling are presented, confirming the validity of the theoretical conclusions.
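
The capacity limit that the abstract attributes to the power distribution algorithm can be illustrated with the classical water-filling allocation over MIMO eigen-channels. This is a sketch of the textbook algorithm, not the authors' method; the gain-to-noise ratios and power budget below are invented for illustration.

```python
import math

def water_filling(gains, total_power):
    """Allocate power across MIMO eigen-channels by water-filling.
    gains: per-channel gain-to-noise ratios lambda_i / N0."""
    # Try the k strongest channels, k = n .. 1, until all powers are non-negative.
    chans = sorted(gains, reverse=True)
    for k in range(len(chans), 0, -1):
        active = chans[:k]
        mu = (total_power + sum(1.0 / g for g in active)) / k  # water level
        powers = [mu - 1.0 / g for g in active]
        if min(powers) >= 0:
            # Capacity in bits/s/Hz over the active eigen-channels.
            capacity = sum(math.log2(1.0 + p * g) for p, g in zip(powers, active))
            return powers, capacity
    return [], 0.0

# Three eigen-channels; the weakest one receives no power under this budget.
powers, cap = water_filling([4.0, 1.0, 0.25], total_power=3.0)
```

Water-filling concentrates power on strong eigenmodes, which is why capacity grows sublinearly once weak channels are dropped.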

  • New
  • Research Article
  • 10.20998/2522-9052.2026.1.09
MATHEMATICAL MODELING AND STABILITY ANALYSIS OF VISUAL LOCALIZATION ALGORITHMS UNDER BRIGHTNESS AND NOISE VARIATIONS
  • Jan 26, 2026
  • Advanced Information Systems
  • Kostiantyn Dergachov + 2 more

Visual localization algorithms are an integral part of modern robotics and navigation systems, providing object position determination based on visual features or images. However, their effectiveness largely depends on external factors, such as image brightness and noise level, which directly affect landmark recognition and coordinate accuracy. Subject of research: analysis of the impact of image brightness and noise on the accuracy and stability of adaptive localization algorithms. The purpose of the work is to quantify the impact of image parameters on the robustness of various localization methods and to identify the algorithms most suitable for real-time operation under unstable visual conditions. Research methods: a two-factor experimental design with brightness and noise level as factors was applied, within which a series of localization experiments was conducted. Mathematical modeling was performed to obtain analytical dependencies of the minimum, average, and maximum localization errors for four algorithms: Proximity, Centroid, Weighted Centroid, and Lateration. Based on the obtained models, a stability coefficient was introduced as an indicator of an algorithm's robustness. Results: the constructed regression models demonstrated high adequacy and made it possible to visualize the influence of brightness and noise on localization accuracy. It was found that the Weighted Centroid and Lateration methods provide the highest operational stability, maintaining low error variation as image parameters change, while the Proximity and Centroid algorithms showed greater sensitivity to noise and lighting fluctuations.
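
The Weighted Centroid method that the study finds most stable can be sketched in a few lines. The anchor positions and weights below are invented for illustration; in practice the weights would come from measured distances or signal strengths.

```python
def weighted_centroid(anchors, weights):
    """Estimate a position as the weight-normalized average of anchor positions.
    Weights are typically inverse distances (or RSSI-derived)."""
    total = sum(weights)
    x = sum(w * ax for (ax, _), w in zip(anchors, weights)) / total
    y = sum(w * ay for (_, ay), w in zip(anchors, weights)) / total
    return x, y

# Anchors at the corners of a unit square; the heavy weight on (0, 0)
# pulls the estimate toward that corner.
est = weighted_centroid([(0, 0), (1, 0), (0, 1), (1, 1)], [4.0, 1.0, 1.0, 1.0])
```

Its robustness to noise comes from the averaging: a perturbed weight shifts the estimate only in proportion to its share of the total weight.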

  • New
  • Research Article
  • 10.20998/2522-9052.2026.1.13
MACHINE LEARNING BASED CLOUD COMPUTING INTRUSION DETECTION
  • Jan 26, 2026
  • Advanced Information Systems
  • Akwaeno Isong + 4 more

In today’s technologically networked world, a sophisticated networking technology known as Software-Defined Networking (SDN) is used in cloud computing environments to improve the effectiveness of network management. However, SDN’s centralized nature makes it vulnerable to DDoS attacks. This study introduces a technique for detecting DDoS attacks in a cloud computing setting. The research applies an ensemble machine learning approach to statistically identify DDoS attacks in cloud network traffic, categorizing it as either harmful or harmless. Various machine learning algorithms, including K-Nearest Neighbors (KNN), Random Forest (RF), and Decision Tree, were used as base classifiers in the proposed ensemble model. A dataset of SDN–DDoS attacks was used to assess the efficacy of the base classifiers, which were trained on 80% of the dataset and evaluated on the remaining 20%. The experimental results indicated that the Random Forest and Decision Tree classifiers attained 100% accuracy, whereas the K-Nearest Neighbors classifier achieved 98.21%. The ensemble model employed a majority voting technique for the final prediction and achieved 100% accuracy on the test set, ranking best among the benchmark models.
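
The majority-voting step that combines the base classifiers can be sketched independently of any ML library. The per-classifier predictions below are invented for illustration.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine base-classifier labels for one sample by majority vote."""
    return Counter(predictions).most_common(1)[0][0]

def ensemble_predict(per_classifier_preds):
    """per_classifier_preds[i][j]: label from classifier i for sample j."""
    return [majority_vote(sample) for sample in zip(*per_classifier_preds)]

# Three hypothetical base classifiers (e.g. KNN, RF, DT) on four traffic samples.
labels = ensemble_predict([
    ["attack", "benign", "attack", "benign"],   # KNN
    ["attack", "benign", "benign", "benign"],   # RF
    ["attack", "attack", "attack", "benign"],   # DT
])
```

With an odd number of binary classifiers no ties can occur, which is one reason three base models is a common choice.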

  • New
  • Research Article
  • 10.20998/2522-9052.2026.1.11
NEURAL NETWORK MODELING AND FORECASTING OF IMBALANCES IN UKRAINE’S LABOR MARKET UNDER EXTREME CONDITIONS
  • Jan 26, 2026
  • Advanced Information Systems
  • Oleksandr Kushnerov + 4 more

Relevance. The full-scale military invasion by the Russian Federation has caused unprecedented distortions in the labour market of Ukraine. These deformations are characterized by deep sectoral and territorial disproportions caused by mass migration, mobilization, destruction of production, and changes in the structure of labour supply and demand. This creates an urgent need for tools to quantify and forecast these deformations, which is essential for making informed decisions. The purpose of this research is to develop and test a comprehensive technique based on neural network modelling (Long Short-Term Memory, LSTM). This methodology aims to identify, assess, and forecast labour market deformations and imbalances in Ukraine, and includes the development of a system of criteria for their evaluation. The research methodology is based on an integrated approach that incorporates time series analysis, neural network forecasting (LSTM), methods for detecting structural shifts and anomalies (Isolation Forest), cluster analysis (K-Means), and determination of influencing factors (Random Forest). The research presents a developed system of criteria for assessing war-induced deformations, conducts a quantitative evaluation of sectoral disruptions resulting from the conflict, provides a forecast of imbalance dynamics, and identifies the most vulnerable sectors of the economy. The conclusions emphasise the scientific and practical significance of the developed methodology for monitoring the labour market, as well as for developing adaptive employment policies and programs to support the post-war recovery of the Ukrainian economy. They also demonstrate the potential of neural network models for analysing labour markets under extreme conditions of uncertainty.
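
An LSTM forecaster of the kind used here is trained on sliding windows of past observations paired with the next value. This preprocessing step, common to any such pipeline (the series values below are invented), can be sketched as:

```python
def make_supervised_windows(series, window):
    """Turn a univariate time series into (input window, next value) pairs,
    the supervised form an LSTM forecaster is trained on."""
    return [
        (series[i : i + window], series[i + window])
        for i in range(len(series) - window)
    ]

# Six hypothetical monthly observations, windowed with three past values each.
pairs = make_supervised_windows([10, 12, 11, 15, 14, 18], window=3)
```

The choice of window length trades off how much history the model sees per step against how many training pairs the series yields.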

  • New
  • Research Article
  • 10.20998/2522-9052.2026.1.01
MATHEMATICAL MODEL FOR CALCULATING THE EXPERT'S COMPETENCY LEVEL
  • Jan 26, 2026
  • Advanced Information Systems
  • Svitlana Krepych + 3 more

In the context of the rapid development of information technologies, software quality is becoming critical to the successful operation of organizations across industries. The growing complexity of modern software solutions requires the involvement of highly qualified specialists in software testing and quality assessment, capable of effectively identifying shortcomings and ensuring that the product meets established standards. At the same time, assessing the competence level of such experts remains a difficult task, often based on subjective criteria and methods. The relevance of the study stems from the acute need of the modern IT market for objective tools for assessing the professional level of specialists, especially in the field of software quality assurance. Traditional approaches to qualification assessment, such as interviews, test tasks, or resume analysis, often do not provide a complete and objective picture of an expert's competence. This problem becomes especially acute in the global labor market, where companies are forced to evaluate specialists remotely, relying on a limited set of data about their experience and skills. Today, software has become an integral part of many areas of everyday life, from the automation and optimization of production processes to creating comfort for individuals. The object of the study is the process of determining the competence level of experts in software quality assessment. The subject of the study is a mathematical model for calculating an expert's competency level. The practical value of the results lies in the possibility of using the developed system by HR managers for effective selection of specialists, by heads of QA departments for forming balanced testing teams, by certification centers for objective competence assessment, and by experts themselves for planning their own professional development.
Conclusion. The developed mathematical model for calculating an expert's competency level reduces the time needed to assess specialists' competence, minimizes the influence of subjective factors in personnel decisions, and optimizes the allocation of human resources in software development and testing projects.
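
The paper's model itself is not reproduced in the abstract; a generic weighted-aggregation sketch shows the kind of calculation such a competency model performs. The criterion names, scores, and weights below are entirely hypothetical.

```python
def competency_level(scores, weights):
    """Hypothetical weighted aggregate of per-criterion scores in [0, 1].
    scores and weights are keyed by criterion name; weights need not sum to 1."""
    total_w = sum(weights.values())
    return sum(weights[c] * scores[c] for c in scores) / total_w

# Illustrative criteria for a QA expert; weights reflect relative importance.
level = competency_level(
    {"experience": 0.8, "test_design": 0.6, "domain_knowledge": 0.9},
    {"experience": 2.0, "test_design": 1.0, "domain_knowledge": 1.0},
)
```

Normalizing by the weight sum keeps the result in [0, 1] regardless of how the weights are scaled, which makes levels comparable across experts.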

  • New
  • Research Article
  • 10.20998/2522-9052.2026.1.04
DEVELOPMENT OF A METHOD FOR CORRECTING THE PLACEMENT OF THE REGION OF INTEREST
  • Jan 26, 2026
  • Advanced Information Systems
  • Oleksandr Laktionov + 3 more

Objective. The process of developing a method for correcting the placement of the region of interest for a tracker has been investigated. The method is based on a nonlinear variable combination methodology that accounts for horizontal and vertical gradients. The justification for selecting the optimal method was carried out considering the number of operations per pixel and the computational complexity of the studied area. The accuracy criterion for region of interest placement correction was variance. To demonstrate the advantages of the proposed method, multiple video streams with varying frame counts were input into the tracker. A comparison was made with the well-known Channel and Spatial Reliability Tracker combined with a Kalman filter featuring different configurations. Results. A method for correcting region of interest placement using a nonlinear methodology requiring 8 operations per pixel has been developed. This method operates in conjunction with the tracker. In experimental videos, the variance decreased by an average of 10.25%, whereas existing methods showed deterioration ranging from -3.61% to -47.63%. The obtained results confirmed compliance with Technology Readiness Level 4. Scientific Novelty. The developed method for correcting the placement of the examined area in the object tracking task differs from existing ones by using combinations of nonlinear variables that take gradient analysis into account. This allows determining the displacement point of the region of interest based on horizontal and vertical gradients. Practical Significance. The proposed method can be used as an additional tool for real-time object tracking.
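
The paper's 8-operation nonlinear method is not specified in the abstract; the sketch below shows the general idea of a gradient-based correction, shifting the region of interest toward the centroid of combined horizontal and vertical gradient magnitude. The patch values are invented for illustration.

```python
def roi_shift(patch):
    """Illustrative correction: shift the region of interest toward the
    centroid of gradient magnitude (|horizontal| + |vertical| differences).
    patch: 2-D list of pixel intensities inside the current ROI."""
    h, w = len(patch), len(patch[0])
    total = sx = sy = 0.0
    for y in range(1, h):
        for x in range(1, w):
            gx = abs(patch[y][x] - patch[y][x - 1])   # horizontal gradient
            gy = abs(patch[y][x] - patch[y - 1][x])   # vertical gradient
            m = gx + gy
            total += m
            sx += m * x
            sy += m * y
    if total == 0:
        return 0, 0   # flat patch: no correction
    # Offset of the gradient centroid from the patch center.
    return round(sx / total - (w - 1) / 2), round(sy / total - (h - 1) / 2)

# A vertical edge in the right half of the patch pulls the ROI to the right.
dx, dy = roi_shift([[0, 0, 0, 10, 10]] * 4)
```

A variance criterion, as in the paper, would then compare the spread of the corrected track against the uncorrected one.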

  • New
  • Research Article
  • 10.20998/2522-9052.2026.1.12
NON-TAYLOR DIFFERENTIAL GAMING PATTERN ECLIPSE ATTACK ON BLOCKCHAIN NODE
  • Jan 26, 2026
  • Advanced Information Systems
  • Olha Hryshchuk + 1 more

Relevance. Information technologies of the 21st century have profoundly reshaped the global economy. As financial processes become increasingly digitalized, the role of traditional banking institutions as intermediaries is gradually diminishing. In this evolving landscape, blockchain technologies and cryptocurrencies have emerged as revolutionary tools, offering decentralized and secure alternatives to conventional financial systems. Cryptocurrencies, built on blockchain foundations, combine high reliability with robust protection against cyberattacks. However, both individual hackers and organized cybercriminal groups continue to target blockchain infrastructures – focusing not only on isolated nodes but also on entire networks and cryptocurrency wallets. Ensuring the resilience of blockchain technologies against such threats is therefore critical to safeguarding users’ digital assets. Eclipse Attacks involve isolating a node to gain control over its information flows, posing a serious threat to network integrity. The object of research. This study introduces a differential game-theoretic model of Eclipse Attacks on blockchain nodes, formulated within a Markov chain framework. The subject of the research. The proposed model employs non-Taylor differential transformations developed by Academician G. Pukhov, enabling a more flexible analytical representation of attack dynamics. The purpose of this paper. The framework captures the strategic interaction between attacker and defender, offering a basis for assessing node security under adversarial conditions. Research results. As a result, the study provides a practical analytical toolkit for developing effective countermeasures against Eclipse Attacks and contributes to the broader discourse on cybersecurity in decentralized systems.
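
The Markov chain framework underlying the model can be illustrated with a minimal discrete-time simulation of a node's state under attack. The three states and all transition probabilities below are hypothetical, chosen only to show the mechanics, not taken from the paper.

```python
def step(dist, P):
    """One step of a discrete Markov chain: row vector times transition matrix."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# Hypothetical node states: 0 = honest peers, 1 = partially isolated, 2 = eclipsed.
P = [
    [0.90, 0.10, 0.00],   # attacker slowly occupies connection slots
    [0.20, 0.60, 0.20],   # defender may recover; attacker may finish isolation
    [0.05, 0.00, 0.95],   # an eclipsed node rarely escapes
]
dist = [1.0, 0.0, 0.0]
for _ in range(50):
    dist = step(dist, P)
# dist now approximates the long-run probability of each state
```

In the game-theoretic setting, the defender's countermeasures would change the recovery probabilities in rows 1 and 2, shifting the long-run mass away from the eclipsed state.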

  • New
  • Research Article
  • 10.20998/2522-9052.2026.1.06
METHOD OF TEST POOL SYNTHESIS FOR AN INTELLIGENT HIGH-DENSITY IOT EDGE-LAYER GATEWAY
  • Jan 26, 2026
  • Advanced Information Systems
  • Volodymyr Panchenko + 4 more

Relevance. High-density IoT environments are characterized by a large concentration of sensors and devices that exchange data intensively within a limited space. Under such conditions, edge-layer intelligent gateways become particularly important. These gateways can locally process information, optimize traffic, and ensure consistent interaction among heterogeneous devices. The development of a test pool for an edge-layer intelligent gateway in high-density IoT is relevant due to the rapid growth in the number of connected devices and the increase in their spatial density. In such conditions, the gateway must maintain stable operation despite high levels of radio interference and competition for network resources. An additional challenge is the heterogeneity of the IoT environment, as devices use different protocols, have different data formats, and exhibit diverse load profiles. Without a specially constructed test pool, it is impossible to reliably evaluate the behavior of the gateway under a realistic mix of technologies and topologies. However, due to substantial heterogeneity, the space of possible test-pool configurations has very high dimensionality. Moreover, there are significant time and resource constraints associated with operating the test pool. The subject of this study is the methods for constructing test pools. The purpose of the article is to develop a method for synthesizing a test pool for an edge-layer intelligent gateway in high-density IoT. The following results were obtained. A five-layer architecture of an edge-layer intelligent gateway for high-density IoT is proposed. The operational specifics of the gateway and the particular aspects of its testing are identified. The task of synthesizing the test pool is reduced to a combinatorial problem of selecting an optimal configuration within an extremely large state space. To solve it, the use of a classical genetic algorithm is proposed. 
The proposed algorithm made it possible, within an acceptable time, to obtain a test pool with nearly minimal execution time, a minimal number of tests, and maximal coverage of the gateway components. Conclusion. The proposed method enables the construction of a test pool for an intelligent gateway within a high-dimensional state space while meeting the specified requirements. Future research concerns the development of a method for reducing the dimensionality of the state space of individual tests for gateway components.
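
The genetic-algorithm formulation described above can be sketched as a search over test-subset bitmasks that rewards component coverage and penalizes execution time over a budget. The coverage sets, costs, budget, and GA parameters below are invented for illustration; the paper's actual encoding and fitness function may differ.

```python
import random

def fitness(mask, cover, cost, budget):
    """Reward covered gateway components; penalize runtime over the budget."""
    covered = set()
    t = 0.0
    for picked, comps, c in zip(mask, cover, cost):
        if picked:
            covered |= comps
            t += c
    return len(covered) - max(0.0, t - budget)

def ga_select(cover, cost, budget, pop=30, gens=60, seed=1):
    """Classical GA: elitism, one-point crossover, single-bit mutation."""
    rng = random.Random(seed)
    n = len(cover)
    popl = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popl.sort(key=lambda m: fitness(m, cover, cost, budget), reverse=True)
        elite = popl[: pop // 2]
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]     # one-point crossover
            i = rng.randrange(n)
            child[i] ^= 1                 # point mutation
            children.append(child)
        popl = elite + children
    return max(popl, key=lambda m: fitness(m, cover, cost, budget))

# Five hypothetical tests: which gateway components each covers, and its runtime.
cover = [{1, 2}, {2, 3}, {4}, {1, 4, 5}, {3}]
cost = [3.0, 2.0, 1.0, 4.0, 1.0]
best = ga_select(cover, cost, budget=8.0)
```

Elitism guarantees the best pool found is never lost between generations, which matters when evaluating a configuration (running the tests) is expensive.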

  • New
  • Research Article
  • 10.20998/2522-9052.2026.1.03
INPUT MATERIAL FLOW VALUES GENERATOR OF A CONVEYOR WITH A GIVEN CORRELATION FUNCTION AND DISTRIBUTION LAW
  • Jan 26, 2026
  • Advanced Information Systems
  • Oleh Pihnastyi + 2 more

The object of this study is a stationary stochastic input flow of material arriving at the input of an industrial conveyor transport system. The goal of this research is to develop a universal statistical mathematical model of the input flow of materials, fully identifiable from a single long-term experimental realization, as well as to create a multi-level system of dimensionless stochastic similarity criteria, enabling the objective classification and comparison of heterogeneous flows with similar structural properties. The results obtained. A simplified canonical decomposition of a stationary ergodic process with a minimum number of random coefficients is proposed, reproducing the specified mathematical expectation, variance, correlation function, and one-dimensional probability density of flow values. Analytical expressions are derived for approximating the distribution density of the random coefficients with guaranteed fulfillment of the conditions of centering, normalization, and non-negativity. A multi-level system of stochastic similarity criteria is developed, including aggregated dimensionless criteria, a functional similarity criterion based on the normalized autocorrelation function, and a functional criterion based on quantile-quantile diagrams. A dimensionless flow normalization method is proposed, ensuring model transferability between conveyor systems differing by orders of magnitude in throughput and time scales. Using six independent long-term realizations from real conveyor systems in the mining and processing industries, the accuracy of the developed stochastic input flow generator with an analytical approximation of the random coefficients is demonstrated. Conclusion. The developed methodology enables the classification and comparison of material input flows in transport systems and serves as the basis for a universal approach to constructing mathematical models and flow control algorithms under stochastic uncertainty.
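
A much simpler generator with a prescribed mean, variance, and exponentially decaying correlation function is the AR(1) process, sketched below with invented parameters. Unlike the canonical decomposition in the paper, this sketch does not match an arbitrary one-dimensional distribution law; its marginal is Gaussian.

```python
import math
import random

def ar1_flow(mean, std, corr, n, seed=7):
    """Generate a stationary Gaussian flow whose lag-k autocorrelation is corr**k.
    AR(1): x[t+1] = corr * x[t] + sqrt(1 - corr^2) * noise, started in the
    stationary N(0, 1) state so the output is stationary from the first sample."""
    rng = random.Random(seed)
    x = rng.gauss(0.0, 1.0)
    scale = math.sqrt(1.0 - corr * corr)
    out = []
    for _ in range(n):
        x = corr * x + scale * rng.gauss(0.0, 1.0)
        out.append(mean + std * x)
    return out

# Hypothetical conveyor input flow: mean 120 t/h, std 15 t/h, lag-1 correlation 0.8.
flow = ar1_flow(mean=120.0, std=15.0, corr=0.8, n=5000)
```

Normalizing the flow to dimensionless form, as the paper proposes, would divide out the mean and standard deviation so that flows from very different conveyors become directly comparable.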

  • New
  • Research Article
  • 10.20998/2522-9052.2026.1.10
AN ADAPTIVE MODEL FOR SOFTWARE CODE QUALITY ASSESSMENT IN REFACTORING TASKS BASED ON FUZZY LOGIC
  • Jan 26, 2026
  • Advanced Information Systems
  • Sergii Liubarskyi + 3 more

The article's objective is to develop a hybrid adaptive model for assessing software code quality based on code smell characteristics by combining fuzzy logic and machine learning methods to enhance the objectivity and efficiency of refactoring. The methodology underlying this research is aimed at developing a hybrid adaptive model for software code quality assessment. It combines fuzzy logic and artificial intelligence methods, specifically an adaptive neuro-fuzzy inference system (ANFIS). The multi-layered ANFIS implements the Takagi-Sugeno fuzzy inference with the ability to learn using gradient methods. The methodology is based on a hybrid approach that integrates expert knowledge with the automated training of the model on real data. Results. The research resulted in the development of a hybrid adaptive model for software code quality assessment based on fuzzy logic and the ANFIS. This model allows for automated, objective, and flexible code quality assessment in refactoring tasks. The model uses eight key code smell metrics: WMC, DIT, RFC, LCOM, NOA, NOC, CBO, and FANOUT. Their normalization and processing are performed using fuzzy logic based on the Takagi-Sugeno algorithm. This ensures that the uncertainty and subjectivity of expert evaluations are taken into account. The ANFIS architecture allows the model to learn from real data, with subsequent automated adjustment of the membership function parameters and rule weights. This enables the model to adapt to various technology stacks and projects. The use of trapezoidal membership functions increases the accuracy of modeling critical code smell zones, while the hybrid learning algorithm based on gradient descent ensures high precision in determining code quality, ultimately contributing to improved software efficiency, maintainability, scalability, and security. The scientific novelty of the research lies in the development of a hybrid adaptive model for software code quality assessment. 
Unlike existing models, this one is based on fuzzy logic and an ANFIS, which combines expert knowledge with automated training on real data to enhance the objectivity and efficiency of the refactoring process. The proposed ANFIS architecture with trapezoidal membership functions is used to process eight key code smell metrics (WMC, DIT, RFC, LCOM, NOA, NOC, CBO, FANOUT) within the context of Takagi-Sugeno fuzzy inference. This provides a flexible, interpretable, and adaptive assessment of code quality with the ability to automatically tune model parameters based on gradient learning, which significantly increases the accuracy of code quality determination and the model's suitability for various technology stacks and projects. The practical significance of the research lies in the direct implementability and integration of the developed hybrid adaptive model for software code quality assessment into existing static analysis tools and DevOps processes, specifically as plugins for Continuous Integration/Continuous Delivery (CI/CD) systems. This will enable automated, objective, and adaptive monitoring of code quality in real time. In addition, the model has significant potential for extension to various programming languages and technology stacks by analyzing large datasets from open-source repositories, which will enhance its universality and accuracy. A promising direction for future work is to improve the ANFIS architecture by incorporating deep learning methods, which would allow for the automatic detection of new code smells and their interdependencies. The development of interpretable mechanisms to explain the model's decisions will increase developer trust in the system and promote its widespread adoption in both industrial software development and educational processes in software engineering and cybersecurity.
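
The trapezoidal membership functions and zero-order Takagi-Sugeno inference named above can be sketched compactly. The rule base below, defined over a single normalized metric, is invented for illustration; the paper's model operates on eight metrics with trained parameters.

```python
def trapmf(x, a, b, c, d):
    """Trapezoidal membership: rises on [a, b], flat on [b, c], falls on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def takagi_sugeno(x, rules):
    """Zero-order Takagi-Sugeno inference: output is the weighted average of
    rule consequents, weighted by membership of x in each rule's antecedent."""
    weights = [trapmf(x, *mf) for mf, _ in rules]
    num = sum(w * out for w, (_, out) in zip(weights, rules))
    den = sum(weights)
    return num / den if den else 0.0

# Hypothetical rules on one normalized complexity metric (e.g. WMC) in [0, 1]:
rules = [
    ((0.0, 0.0, 0.2, 0.4), 1.0),   # low complexity  -> good quality
    ((0.2, 0.4, 0.6, 0.8), 0.5),   # medium          -> acceptable
    ((0.6, 0.8, 1.0, 1.0), 0.0),   # high complexity -> needs refactoring
]
quality = takagi_sugeno(0.3, rules)
```

In an ANFIS, the trapezoid corners and the rule consequents are exactly the parameters tuned by gradient learning, which is what lets the model adapt to a project's data.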