Articles published on Software implementation
8929 Search results
- New
- Research Article
- 10.17587/mau.27.106-112
- Feb 6, 2026
- Mekhatronika, Avtomatizatsiya, Upravlenie
- O V Karsaev
The object of research is the information technology of autonomous group control of a multi-satellite Earth remote sensing system. An orbital constellation of small-satellite clusters is considered as a case study, where a cluster is a group of satellites located close to one another. The information technology is based on an agent-oriented approach and on information interaction between satellite software agents over inter-satellite communication links. This interaction is considered with allowance for the dynamics of establishing inter-satellite communication lines over time: communication within a cluster is assumed to be possible in real time, while communication between clusters is possible only within time intervals during which the necessary conditions are met. The information interaction protocols serve the autonomous group solution (without the participation of the ground control complex) of the following tasks: 1) distribution and redistribution of requests within each cluster; 2) redistribution of requests between clusters; 3) determination and coordination of the order of transmitting observation data to Earth during communication sessions with ground stations; and 4) search for a distribution of survey data between clusters, using inter-satellite communication, that reduces the time of delivering survey data to Earth. While participating in these protocols, each agent autonomously plans the targeted use of its satellite: it finds an acceptable imaging plan and forms a flight plan that accounts for the technical capabilities and limitations of the satellite, as well as control of the electrical power balance. The capabilities and effectiveness of the information technology are demonstrated using a software implementation of its simulation model; the article describes examples of the output data generated during simulation.
- New
- Research Article
- 10.1021/acs.jproteome.5c01038
- Jan 20, 2026
- Journal of proteome research
- Thang V Pham + 8 more
Protein quantification is a crucial data processing step that combines quantitative values at the peptide or fragment level into protein levels in mass spectrometry-based proteomics. However, many of the current algorithms, including the state-of-the-art method MaxLFQ, do not scale well with the increasing number of samples, because of the limited system memory and algorithmic complexities. Here we introduce the iq format, a novel data structure designed to support very large data sets. We optimize existing quantification methods for both speed and memory usage. In particular, the new algorithms maxlfq-bit and rlm-cd significantly improve the base methods, MaxLFQ and the robust linear model, respectively, achieving orders of magnitude speed improvements for a large number of samples. The experimental result shows that the MaxLFQ algorithm achieves the highest accuracy, despite its comparatively higher computational cost. We also introduce a generic algorithm to boost the quantification accuracy of all methods by reducing the effect of noisy ion intensity traces. The experimental results show that the weighting approach improves the performance of all tested methods on a spike-in data set and a mixed species data set. The software implementation is publicly available in the R package iq from version 2.
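The core MaxLFQ idea (pairwise median peptide log-ratios combined by least squares) can be sketched in a few lines of NumPy. This is an illustrative reimplementation, not the optimized maxlfq-bit algorithm or the iq package's code; the function name and the mean-anchoring constraint are our own choices.

```python
import numpy as np

def maxlfq_sketch(log_intensities):
    """Estimate per-sample protein log-abundances from a
    (peptides x samples) matrix of log2 intensities (NaN = missing):
    for each sample pair take the median peptide log-ratio, then fit
    all abundances jointly by least squares (the MaxLFQ principle)."""
    X = np.asarray(log_intensities, dtype=float)
    n = X.shape[1]
    A, b = [], []
    for j in range(n):
        for k in range(j + 1, n):
            shared = ~np.isnan(X[:, j]) & ~np.isnan(X[:, k])
            if not shared.any():
                continue
            r = np.median(X[shared, j] - X[shared, k])  # median log-ratio
            row = np.zeros(n)
            row[j], row[k] = 1.0, -1.0
            A.append(row)
            b.append(r)
    # Ratios only fix differences; anchor the mean to the observed mean.
    A.append(np.ones(n))
    b.append(np.nanmean(X) * n)
    a, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return a
```

The real algorithms additionally handle huge sample counts and noisy ion traces, which this sketch ignores.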
- New
- Research Article
- 10.69849/revistaft/dt10202601202142
- Jan 20, 2026
- Revista ft
- Leticia Col Debella Santos + 1 more
ABSTRACT This study analyzes the implementation of software designed to automate pallet-rack project development within a metalworking company, integrating agile methodologies with structured project management practices. The initiative is organized into four sequential phases—automated floor plan generation, technical views, bill of materials, and 3D modeling—supported by iterative deliveries and continuous validation. The research outlines the project scope, stakeholder mapping, resources, risks, quality plan, and financial structure, emphasizing how agile practices enhance transparency, adaptability, and operational efficiency. Expected outcomes include a significant reduction in project lead time, elimination of manual quantitative errors, improvement in design standardization, and increased productivity across engineering and production workflows. The project contributes to the company’s broader digital transformation strategy by strengthening technical reliability and enabling scalable process automation. Keywords: Agile methodologies; Project automation; Pallet racking.
- New
- Research Article
- 10.1063/5.0305862
- Jan 13, 2026
- The Journal of chemical physics
- Gregor Häfner + 1 more
We present an open-source, graphics processing unit (GPU)-accelerated software implementation of the Uneyama-Doi model (UDM) for studying the collective dynamics of block copolymer blends and solutions. The UDM provides a field-theoretic framework that includes the entropy of mixing, binary interactions between segment species, and molecular connectivity, thereby capturing interfacial properties even in the strong-segregation regime. Our implementation utilizes a semi-implicit time-stepping scheme, incorporates thermal noise, and employs a concentration-conserving regularization algorithm that maintains non-negative concentrations. Spatial derivatives and convolutions are computed via optimized CUDA-based pseudo-spectral methods, enabling simulations of systems spanning tens of polymer end-to-end distances and thousands of molecular relaxation times within hours on a single GPU. We validate the implementation against established results, including the mean-field phase diagram of diblock copolymers, structure factors of disordered systems, and the fluctuation-induced order-disorder transition for symmetric copolymers. Dynamic simulations reproduce experimentally observed amphiphilic morphologies, including micellar lattices, vesicles, and phase-separated structures. The software provides an efficient and versatile tool for investigating equilibrium and nonequilibrium behavior of complex polymer systems.
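The semi-implicit time-stepping idea, treating the stiffest linear term implicitly in Fourier space while keeping the nonlinearity explicit, can be illustrated on Cahn-Hilliard-type dynamics. This NumPy sketch is a CPU stand-in for the paper's CUDA pseudo-spectral kernels; the specific free energy and parameters here are illustrative, not the UDM's.

```python
import numpy as np

def ch_semi_implicit_step(phi, dt=0.1, kappa=1.0):
    """One semi-implicit pseudo-spectral step of conserved dynamics
    dphi/dt = lap(phi**3 - phi - kappa * lap(phi)) on a periodic
    square grid: the stiff biharmonic term is treated implicitly,
    the nonlinearity explicitly."""
    k = 2 * np.pi * np.fft.fftfreq(phi.shape[0])
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    nonlin_hat = np.fft.fft2(phi**3 - phi)
    phi_hat = np.fft.fft2(phi)
    # Implicit treatment of kappa*k^4 removes the worst dt restriction.
    phi_hat_new = (phi_hat - dt * k2 * nonlin_hat) / (1.0 + dt * kappa * k2**2)
    return np.real(np.fft.ifft2(phi_hat_new))
```

Because the k = 0 mode is untouched, the scheme conserves the mean concentration exactly, mirroring the concentration-conserving property emphasized above.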
- Research Article
- 10.3390/bdcc10010022
- Jan 6, 2026
- Big Data and Cognitive Computing
- Gonzalo Nápoles + 4 more
Fuzzy Cognitive Maps (FCMs) are a type of recurrent neural network with built-in meaning in their architecture, originally devoted to modeling and scenario simulation tasks. These knowledge-based neural systems support feedback loops that handle static and temporal data. Over the last decade, there has been a noticeable increase in the number of contributions dedicated to developing FCM-based models and algorithms for structured pattern classification and time series forecasting. These models are attractive since they have proven competitive compared to black boxes while providing highly desirable interpretability features. Equally important are the theoretical studies that have significantly advanced our understanding of the convergence behavior and approximation capabilities of FCM-based models. These studies can be challenging for readers who are not experts in mathematics or computer science. As a result, flawed FCM studies that fail to benefit from the field's theoretical progress occasionally appear. To address these challenges, this survey paper aims to cover relevant theoretical and algorithmic advances in the field, while providing clear interpretations and practical pointers for both practitioners and researchers. Additionally, we survey existing tools and software implementations, highlighting their strengths and limitations towards developing FCM-based solutions.
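The recurrent inference at the heart of an FCM is a one-line update: each concept's next activation is a squashing function of the weighted sum of incoming activations. A minimal sketch follows; the update convention and sigmoid slope are one common choice among several in the literature.

```python
import math

def fcm_step(activations, weights, slope=5.0):
    """One reasoning step of a Fuzzy Cognitive Map. weights[i][j] is
    the causal weight from concept i to concept j; each concept's next
    activation is the sigmoid of its weighted incoming sum."""
    n = len(activations)
    nxt = []
    for j in range(n):
        s = sum(activations[i] * weights[i][j] for i in range(n))
        nxt.append(1.0 / (1.0 + math.exp(-slope * s)))
    return nxt

def fcm_run(a0, weights, steps=50):
    """Iterate until (ideally) the map settles to a fixed point."""
    a = a0
    for _ in range(steps):
        a = fcm_step(a, weights)
    return a
```

Whether such iteration converges to a unique fixed point, a cycle, or chaos is precisely what the convergence studies surveyed above analyze.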
- Research Article
- 10.3390/electronics15020255
- Jan 6, 2026
- Electronics
- Mihai Rotaru + 5 more
This study presents a unified symbolic–numerical framework for the automatic generation and conversion of two-port network parameters, including Z, Y, H, F, T (A, B, C, and D), and S matrices. The method integrates Modified Nodal Analysis (MNA) with exact symbolic computation to derive transfer functions, poles, zeros, and parameter sensitivities directly from the circuit topology, eliminating the need for manual algebraic manipulation. Unlike conventional tools such as PSpice 9.1 or RF simulation software, which operate primarily on numerical models, the proposed approach provides closed-form expressions suitable for analytical design, optimization, and parameter-tolerance evaluation. The implemented software routines generate all parameter sets within a single workflow and enable bidirectional conversion between low-frequency formulations and high-frequency scattering representations. Numerical case studies on band-pass filters confirm the correctness of the generated expressions, with deviations below 1% relative to reference simulation results.
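As a small numeric illustration of the parameter-conversion step, the standard identities mapping ABCD (transmission) parameters to the impedance matrix can be coded directly. This is textbook material, not the paper's symbolic engine.

```python
def abcd_to_z(A, B, C, D):
    """Convert the ABCD (transmission) parameters of a two-port to its
    impedance matrix Z via the standard identities
    Z11 = A/C, Z12 = (AD - BC)/C, Z21 = 1/C, Z22 = D/C.
    Undefined when C == 0 (e.g. a lone series impedance).
    Works with real, complex, or exact rational values alike."""
    if C == 0:
        raise ValueError("ABCD-to-Z conversion undefined for C == 0")
    return [[A / C, (A * D - B * C) / C],
            [1 / C, D / C]]
```

For a reciprocal network AD - BC = 1, so Z12 = Z21, which gives a quick sanity check on generated parameter sets.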
- Research Article
- 10.1016/j.asr.2025.10.077
- Jan 1, 2026
- Advances in Space Research
- Feifei Chen + 2 more
An innovative kinematic framework for space robot systems: Kinematic modeling, path planning and software implementation
- Research Article
- 10.31294/co-science.v6i1.10072
- Jan 1, 2026
- Computer Science (CO-SCIENCE)
- Yoga Pratrian + 2 more
The increasing demand for fast, accurate, and efficient healthcare services has encouraged the development of integrated information systems. This study aims to design and develop a Health Service Information System (SILAKES) based on web and mobile platforms at the Oputa Yi Koo Heart and Blood Vessel Hospital (RSJPD) in Kendari. The system is developed to make it easier for patients to access doctor schedules, take queue numbers online, use consulting services, check blood stock availability, and fill out satisfaction surveys. The development process adopts the waterfall (linear sequential) method, which consists of five stages: requirement definition, system and software design, implementation and unit testing, integration and system testing, and operation and maintenance. The results show that SILAKES improves the efficiency of hospital staff and allows patients to access health services without space and time constraints. The implementation of this system also contributes to the digitalization of hospital services and supports the enhancement of overall healthcare service quality.
- Research Article
- 10.47026/1810-1909-2025-4-98-110
- Dec 30, 2025
- Vestnik Chuvashskogo universiteta
- Aleksandr I Orlov + 2 more
Modern energy and electronic systems comprise a large number of nonlinear elements operating in dynamic modes. Accurate modeling of such systems requires proper consideration of nonlinear current-voltage characteristics (CVC) and transient processes, which is especially important when designing rectifiers, inverters, control systems and other power electronics devices. Existing electrical circuit simulators do not always provide users with the necessary flexibility, scalability, or compatibility with enterprise safety standards, and may have legal restrictions. Custom effective modeling methods for such systems allow for creating specialized software solutions to analyze dynamic modes of electrical circuits, free from the limitations of commercial simulators and tailored to specific engineering and scientific tasks. The purpose of the work is to develop a method for numerical modeling of dynamic modes of electrical circuits comprising semiconductor diodes or other elements with nonlinear CVC, described by nodal equations. The scientific novelty lies in the development of a homotopy approach to overcoming high condition number of the Jacobian matrix when solving nonlinear differential-algebraic equations (DAEs) of electrical circuits, based on deformation of CVC with adaptation of the deformation parameter depending on the residual norm and condition number of the matrix; in the development of a method for extracting linearly independent differential equations from DAEs of electrical circuits without preliminary analysis of their topology; in the development of a universal stamp of a nonlinear element, based on linearization of the functional in the vicinity of the current approximation, allowing for integration of diode models with various CVCs into nodal equations. Materials and methods. Theoretical electrical engineering methods were used in the work, including the modified nodal potential method. 
The proposed numerical modeling method involves integration of elements with nonlinear CVC into nodal equations; extraction of differential and algebraic parts from DAEs based on singular value decomposition of the matrix multiplying the derivative vector; and transformation of DAEs into a nonlinear system of algebraic equations using backward differentiation formulas (BDF) with a variable time step. Initial points for BDF were determined by the diagonally implicit Runge–Kutta method of second-order accuracy. Numerical solution of the obtained nonlinear equations was performed by the damped Newton–Raphson method. To reduce the condition number of the Jacobian matrix in transient modes, when the spread of differential conductivities reaches 12 orders of magnitude or more, a homotopy approach was proposed, consisting in the gradual deformation of the diode CVC from a smoothed curve to the original one during convergence, while maintaining a given value of the condition number. Results. To demonstrate the proposed solutions, computer simulation of a bridge rectifier operating on an active-inductive load with two types of diode CVC was performed: piecewise-linear and smooth, the latter corresponding to the Shockley equation with series resistance. The deformation parameter and damping coefficient were adapted depending on the residual norm of the functional and the condition number of the Jacobian matrix. Comparison of simulation results with different methods of specifying the diode CVC showed that differences appear predominantly in transient processes when diodes switch operating modes. It has been found that to ensure convergence of the numerical solution in diode switching modes, characterized by a high condition number of the Jacobian matrix, the homotopy approach is more effective than diagonal regularization.
The proposed method for numerical modeling of dynamic modes of electrical circuits with nonlinear elements has a natural algorithmic structure, allowing for simple software implementation. Conclusions. 1. The most universal diode stamp, obtained on the basis of linearization of the functional derived from the CVC equation in the vicinity of the current approximation, has been identified. 2. A method for extracting linearly independent differential equations from DAEs of electrical circuits without preliminary analysis of circuit topology has been proposed. 3. A method for calculating the Jacobian matrix for solving nonlinear DAE has been proposed. 4. To ensure convergence of numerical solution with high condition number of the Jacobian matrix, it is preferable to apply the homotopy approach.
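The damped-Newton-plus-homotopy idea can be illustrated on a one-node diode circuit: the exponential CVC is smoothed by a deformation factor that is stepped back down to the original curve, each stage warm-starting the next. All values and the specific deformation scheme below are illustrative, not the article's.

```python
import math

def residual(V, Vs, R, Is, vt):
    """Node equation (V - Vs)/R + Id(V) = 0 with a Shockley diode."""
    return (V - Vs) / R + Is * (math.exp(V / vt) - 1.0)

def solve_diode_node(Vs=5.0, R=1e3, Is=1e-12, Vt=0.02585,
                     levels=(10.0, 5.0, 2.0, 1.0)):
    """Damped Newton with a homotopy on the CVC: the thermal voltage
    is scaled by m (a smoothed, better-conditioned curve for m > 1)
    and m is stepped down to 1, the original characteristic."""
    V = 0.0
    for m in levels:                      # homotopy: smoothed -> original CVC
        vt = m * Vt
        for _ in range(100):              # damped Newton at this deformation
            f = residual(V, Vs, R, Is, vt)
            df = 1.0 / R + Is * math.exp(V / vt) / vt
            step = f / df
            t = 1.0                       # halve the step until residual shrinks
            while t > 1e-6 and abs(residual(V - t * step, Vs, R, Is, vt)) > abs(f):
                t *= 0.5
            V -= t * step
            if abs(step) < 1e-12:
                break
    return V
```

The smoothed stages keep the Jacobian well conditioned where the raw exponential would make plain Newton diverge or stall, which is the motivation the paper gives for CVC deformation.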
- Research Article
- 10.15587/1729-4061.2025.347800
- Dec 30, 2025
- Eastern-European Journal of Enterprise Technologies
- Masuma Mammadova + 4 more
The object of the study is the clinical decision-making process for selecting a hepatocellular carcinoma (HCC) treatment method based on the patient's medical data. The selection of an HCC treatment method remains poorly formalized and is characterized by multiple criteria and numerous clinical situations, for each of which the most appropriate therapeutic solution must be promptly identified. This study develops an intelligent medical decision support system for HCC treatment method selection based on knowledge applicable in clinical practice. It offers architectural and functional principles of the intelligent decision support system, classifying clinical situations by HCC treatment method across multiple possible combinations of 44 informative parameters. Based on the current values of these parameters, expert knowledge is transformed into production rules identifying HCC treatment methods. A heuristic treatment selection procedure is developed based on production rule analysis in accordance with current parameter values, reproducing the reasoning patterns of participants in a multidisciplinary council during their consensus decision-making regarding HCC treatment appointment. A software implementation of the decision-making model for HCC treatment selection, written in C# using the Visual Studio 2019 platform, enabled the integration of the intelligent system with web technologies. The intelligent medical decision support system automates the unique experience of professionals and helps physicians in a multidisciplinary consultation make prompt and informed decisions online regarding the appointment of personalized therapy. The system was piloted with expert physicians in several iterations until a complete match was achieved between the consensus decision of the multidisciplinary council and the decision made by the developed system in accordance with clinical recommendations.
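The production-rule matching step can be sketched minimally: each rule maps a combination of parameter values to a treatment. The parameter names and rules below are invented for illustration; they are not the 44 clinical parameters or the actual rules of the system.

```python
# Hypothetical rules for illustration only -- not clinical guidance.
EXAMPLE_RULES = [
    {"if": {"stage": "early", "liver_function": "good"}, "then": "resection"},
    {"if": {"stage": "advanced"}, "then": "systemic therapy"},
]

def match(rule, patient):
    """A rule fires when every condition in its 'if' part holds
    for the patient's current parameter values."""
    return all(patient.get(k) == v for k, v in rule["if"].items())

def recommend(rules, patient):
    """Return the treatments of all rules the patient satisfies;
    a heuristic selection over these candidates would follow."""
    return [r["then"] for r in rules if match(r, patient)]
```

In the described system, the heuristic procedure then arbitrates among fired rules the way a multidisciplinary council converges on a consensus.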
- Research Article
- 10.31474/1999-981x-2025-2-29-40
- Dec 30, 2025
- JOURNAL of Donetsk mining institute
- Sergey Vlasov + 3 more
Purpose. To perform a comparative analysis of the results of field measurements of the convergence of the edge of the mining working and numerical modeling of its deformations in 2D and 3D settings using the Phase-2 and SolidWorks software environments according to the Hoek-Brown and Mohr-Coulomb strength criteria in order to determine the reliability and practical applicability of various methods for predicting the stability of workings in geomechanical conditions of deep horizons of mines in the Western Donbass. Method. The study performed field measurements of the convergence of the marginal part of the 873 aggregated drift of the West Donbass mine. Numerical modeling of the stability of the mining workings in a planar setting (2D) in the Phase-2 environment using the Hoek-Brown and Mohr-Coulomb strength criteria, as well as in a spatial setting (3D) in the SolidWorks environment using the Mohr-Coulomb strength criterion was performed. Calculation models were built, taking into account the geometry of the workings and the lithological section of the massif. The results of the numerical prediction were compared with actual measurements to determine the reliability of different modeling approaches and their suitability for practical prediction of the stability of workings in the geomechanical conditions of the West Donbass. Results. It was found that 2D calculations in Phase-2 show overestimated values of the working height (+14.3% according to the Hoek-Brown strength criterion and +18.9% according to the Mohr-Coulomb strength criterion), while 3D calculation in SolidWorks showed an underestimated value (–8.9% according to the Mohr-Coulomb strength criterion). Thus, none of the methods gives a complete correspondence to the field measurements, but all results are within the error range of up to 20%, which indicates their high reliability.
The obtained results indicate the feasibility of numerical modeling in several software environments for mutual verification of predictions and increasing the accuracy of assessing the stability of mining workings. Scientific novelty. For the first time, a comparative analysis of the correspondence of field measurements of the convergence of a mining working to the results of numerical modeling in different software environments (Phase-2 and SolidWorks), taking into account different strength criteria, in the conditions of mines of Western Donbass, was performed. It was proven that the use of several software implementations of the numerical method allows detecting deviations due to the peculiarities of the problem formulation (2D or 3D) and provides a more reliable prediction of the stability of the workings in complex geomechanical conditions. Practical significance. The results of the study can be used to increase the reliability of mine workings stability forecasts in complex geomechanical conditions of the Western Donbass. The use of numerical modeling in different software environments (Phase-2 and SolidWorks) provides the possibility of mutual verification of results and allows mining engineers to more reasonably choose parameters and develop measures to support the workings. This contributes to reducing the risks of emergency deformations, optimizing design solutions and increasing the safety of mine operation at great depths.
- Research Article
- 10.31866/2617-796x.8.2.2025.347946
- Dec 29, 2025
- Digital Platform: Information Technologies in Sociocultural Sphere
- Oleksandr Tkachenko + 1 more
Today, the scope of application of unmanned aerial vehicles is expanding, particularly in logistics, high-precision agronomy, defence, and monitoring tasks. All this highlights the problem of ensuring their autonomy and high navigation accuracy. An analysis of the current market for software for autonomous control of unmanned aerial vehicles reveals a shortage of necessary software solutions, which complicates the selection of the most effective software package. The purpose of the article is to study modern software tools designed to determine the coordinates and ensure the navigation and localisation of uncrewed aerial vehicles in autonomous mode, as well as the functional and technical capabilities of these devices. The research methodology includes a comparative analysis of the leading software solutions in this subject area (unmanned aerial vehicles). The article discusses approaches to developing and operating a software solution for determining coordinates during the autonomous flight of such aircraft, based on the use of an extended Kalman filter, which facilitates the fusion of data from inertial, visual, and satellite systems. Conclusions. The paper summarises fundamental methods for determining coordinates. It substantiates the need for sensor fusion to minimise cumulative error and ensure fault tolerance, which became the basis for further software implementation. An analysis of the UAV software market is conducted, where modern software tools are classified according to architectural principles, highlighting the dichotomy between open platforms (such as ArduPilot and PX4) and commercial ecosystems. It was established that open platforms provide greater flexibility, which is necessary for implementing new SLAM algorithms. At the same time, commercial solutions offer a high degree of integration and compliance with regulatory requirements. 
To implement reliable coordinate determination in closed environments and environments without GNSS, a software complex based on the Robot Operating System was developed (this ensured a modular approach and ease of scaling).
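A scalar Kalman filter conveys the fusion principle: the IMU drives the prediction step and satellite fixes drive the correction step. This 1-D sketch is a stand-in for the extended Kalman filter over full inertial/visual/satellite states; the noise parameters are arbitrary.

```python
def kalman_1d(z_gnss, accel, dt=0.1, q=0.5, r=4.0):
    """Minimal 1-D Kalman filter: propagate position with integrated
    IMU acceleration (predict), then correct with a noisy GNSS
    position fix (update). q is process noise, r measurement noise."""
    x, v, p = 0.0, 0.0, 1.0
    out = []
    for z, a in zip(z_gnss, accel):
        v += a * dt                 # predict: integrate IMU acceleration
        x += v * dt
        p += q                      # uncertainty grows between fixes
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # update: blend in the GNSS fix
        p *= (1.0 - k)
        out.append(x)
    return out
```

The gain k automatically down-weights whichever source is currently less trustworthy, which is why fusion bounds the cumulative IMU drift, the property argued for above.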
- Research Article
- 10.37385/jaets.v7i1.7151
- Dec 29, 2025
- Journal of Applied Engineering and Technological Science (JAETS)
- Linett Velasquez-Jimenez + 4 more
This study presents the design and implementation of a multiplatform inventory management system developed for a public university in Peru, aiming to improve process efficiency and user satisfaction. Following an agile development methodology (SCRUM), the system was designed using modular architecture and responsive interfaces to ensure compatibility across devices and browsers. The usability evaluation was carried out using the CSUQ questionnaire, and the results were transformed to the SUS scale to assess the overall experience. A descriptive quantitative methodology was used, supported by surveys and technical compatibility testing. The findings reveal high user satisfaction, a SUS score of 93.75 ("Best imaginable"), and strong performance across all functionalities, particularly in navigation and inventory tracking. These results confirm the effectiveness of agile development in higher education contexts and highlight the importance of user-centered design in administrative systems.
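For reference, the standard SUS scoring rule (odd items contribute r - 1, even items 5 - r, and the sum is scaled by 2.5) can be computed as follows; the paper's specific CSUQ-to-SUS transformation is not reproduced here.

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten
    1-5 Likert responses: odd-numbered items (0-based even indices)
    contribute (r - 1), even-numbered items contribute (5 - r),
    and the total is multiplied by 2.5."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten responses in 1..5")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5
```

A score of 93.75, as reported above, sits in the top adjective band ("Best imaginable") of common SUS interpretation scales.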
- Research Article
- 10.31866/2617-796x.8.2.2025.347949
- Dec 29, 2025
- Digital Platform: Information Technologies in Sociocultural Sphere
- Svitlana Popereshniak + 1 more
Ensuring the accuracy of laser beam guidance is a key condition for the quality of modern museum and multimedia installations. Even minor dynamic deviations caused by vibrations or thermal deformations lead to a loss of projection clarity and a decrease in the immersion effect. The purpose of the article is to develop and test an embedded software system to compensate for dynamic errors in laser projection guidance in real time for museum and multimedia installations. The research methodology is mathematical modelling of dynamic deviations, computer vision algorithms for detecting and tracking laser marks, sensor fusion of data from inertial measurement units (IMUs), as well as modular software implementation on single-board computers running Linux. The work uses system analysis to evaluate existing approaches, experimental testing to verify the performance of algorithms, and comparative tests with classical stabilisation methods. The novelty of the research lies in the creation of an affordable and resource-saving system that combines CMOS sensors, light spot detection algorithms and quaternion integration of IMU data. Such an architecture enables the processing of streaming video at a frequency of approximately 90 frames/s, with low hardware requirements, allowing for the high-quality compensation of errors without the need for expensive opto-mechanical equipment. The conclusion of the research. The article identifies the main problems of dynamic stabilisation of laser projections, analyses modern hardware and software solutions, and develops and tests the author’s built-in correction system. The results obtained confirm that the proposed approach enables the enhancement of accuracy and stability in projections for museum and multimedia applications, offering a combination of cost-effectiveness, scalability, and technical reliability.
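The quaternion-integration building block of such IMU fusion can be sketched via the exponential map: a body-rate sample over dt becomes a small rotation quaternion composed onto the current attitude. This is a generic illustration, not the article's implementation.

```python
import math

def quat_mul(a, b):
    """Hamilton product of quaternions (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def quat_integrate(q, omega, dt):
    """Advance unit quaternion q by body angular rate omega (rad/s)
    over dt: the rotation vector omega*dt is mapped to a small
    quaternion and composed with q."""
    rx, ry, rz = (o * dt for o in omega)
    angle = math.sqrt(rx*rx + ry*ry + rz*rz)
    if angle < 1e-12:
        return q
    s = math.sin(angle / 2) / angle
    dq = (math.cos(angle / 2), rx * s, ry * s, rz * s)
    return quat_mul(q, dq)
```

In a correction loop, the orientation maintained this way predicts where the laser spot should land, and the camera-detected spot position supplies the error signal.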
- Research Article
- 10.61260/2218-130x-2025-4-117-130
- Dec 27, 2025
- Scientific and analytical journal «Vestnik Saint-Petersburg university of State fire service of EMERCOM of Russia»
- Viktor Akapyev + 3 more
The growing digitalization of all areas of human activity makes it necessary to train various categories of users, from ordinary information consumers to cybersecurity specialists, to solve information security problems. Just as cybersecurity analysts rely on various software tools in their work, various software products are used in the training of specialists. The study selects the tools most suitable for the educational process, formulates requirements for the implementation of a dialogue mode and a user interface, and proposes a software implementation option.
- Research Article
- 10.33271/nvngu/2025-6/148
- Dec 26, 2025
- Naukovyi Visnyk Natsionalnoho Hirnychoho Universytetu
- I S Laktionov + 3 more
Purpose. To conduct a multi-criteria evaluation and analysis of the performance of encryption algorithms that may be potentially resistant to contemporary cyberattacks, including quantum attacks. The evaluation takes into account the ability of the algorithms to be deployed on devices with limited computational resources within infocommunication networks during the transmission of information messages. Methodology. Software implementation, testing and validation of selected cryptographic algorithms in Python were applied, considering the impact of limited resources and destabilising factors, such as signal noise components, on the basis of computer experiments. The performance of the studied cryptographic algorithms was analysed using statistical data processing methods and a multi-criteria evaluation approach. Findings. The symmetric algorithms AES-256-GCM and ChaCha20-Poly1305 demonstrated the highest accuracy in signal recovery following encryption and decryption (MSE ranges from 1.95·10⁻⁶ to 5.12·10⁻⁵). Encrypting and decrypting I/Q signals with the symmetric algorithms was found to be around 2.5 times faster than with the Kyber family. Computer experiments confirmed the existence of a trade-off between processing speed and security level. Symmetric algorithms are optimal for scenarios with critical processing speed requirements. However, Kyber provides greater protection reliability, albeit at the cost of additional resources. The correctness of the proposed computer model, which enables the computational and information-functional characteristics of cryptographic algorithms to be evaluated, has been proven. Originality.
Patterns of the destabilising influence of signal-to-noise ratio indicators and signal length on the accuracy of digital signal recovery after encryption have been established for different cryptographic algorithms (AES, ChaCha20, and Kyber) in the context of their use in resource-constrained infocommunication systems. Practical value. Implementation of the computer model proved its suitability for studying cryptographic algorithms in resource-constrained environments, as well as its potential for improving information security protocols and selecting optimal algorithms based on processing speed requirements and desired security levels.
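A minimal version of such a benchmark loop (quantize an I/Q signal to int16, encrypt, decrypt, then report elapsed time and reconstruction MSE) can be sketched as below. The XOR keystream is an insecure stand-in so the harness runs without third-party packages; for real measurements one would swap in AES-256-GCM or ChaCha20-Poly1305, e.g. from the `cryptography` package.

```python
import random
import struct
import time

def quantize(iq, scale=32767.0):
    """Map float samples in [-1, 1] to int16 codes."""
    return [max(-32768, min(32767, round(v * scale))) for v in iq]

def xor_stream(data, key):
    """Insecure XOR keystream stand-in for a real AEAD cipher;
    XOR with the same keystream is its own inverse."""
    ks = random.Random(key)
    return bytes(b ^ ks.randrange(256) for b in data)

def benchmark(iq, key=42):
    """Encrypt and decrypt a quantized I/Q signal, returning elapsed
    time and reconstruction MSE. Since the cipher is lossless, the
    MSE measured here stems only from int16 quantization."""
    q = quantize(iq)
    raw = struct.pack(f"<{len(q)}h", *q)
    t0 = time.perf_counter()
    ct = xor_stream(raw, key)       # "encrypt"
    pt = xor_stream(ct, key)        # "decrypt"
    elapsed = time.perf_counter() - t0
    rec = [v / 32767.0 for v in struct.unpack(f"<{len(q)}h", pt)]
    mse = sum((a - b) ** 2 for a, b in zip(iq, rec)) / len(iq)
    return elapsed, mse
```

Running the same harness over several ciphers, signal lengths, and added noise levels yields exactly the kind of speed/accuracy table the study analyses.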
- Research Article
- 10.33271/nvngu/2025-6/069
- Dec 26, 2025
- Naukovyi Visnyk Natsionalnoho Hirnychoho Universytetu
- O Yu Mykhailenko + 4 more
Purpose. The purpose of the article is to develop a methodology for creating a cone crusher adaptive control system based on the model-based design method for automated software generation for microprocessor controllers. Methodology. A method based on a block-oriented predictive model was used to generate control signals for the cone crusher. The parameters and structure of this model were identified in real time using measured data from the plant. A prototype of the control system was created in MATLAB/Simulink. Then, the model-based design method was used to generate software for digital signal processors. Mathematical statistics methods were employed to analyze the experimental results. Findings. A method of model-based design of an adaptive control system for a cone crusher has been developed. The system uses a predictive model with a block-oriented structure, which adjusts the predictive controller’s structure and parameters directly during operation. This approach makes it possible to divide the functions of identifying the process model and generating controls between two digital controllers. Consequently, the average computational time is reduced while ensuring the stabilization of the degree of homogeneity of ore crushing and separate output of the control size class, with standard deviation coefficients not exceeding 3.42 % and 1.83 %, respectively. Originality. The regularity of the effect of the closed-side setting and the eccentric speed on the particle size distribution of crushed ore has been established. This shows that high homogeneity of the crushed product is ensured by simultaneously adjusting these input coordinates. We propose a new method for synthesizing an adaptive cone crusher control system based on a model-based design approach.
This method provides automated real-time generation of software for microprocessor-based controllers, allowing the system to quickly adjust to changes in rock mass characteristics and other disturbances. Practical value. A hardware and software implementation of an adaptive control system for a cone crusher is proposed. This system is based on a block-oriented predictive model. The model ensures the stabilization of the required ore particle size distribution. This stabilization is achieved by adjusting the closed-side setting and the eccentric speed. The system is based on 16-bit, low-cost digital signal processors. A prototype of the system was tested in a crushing plant at a metallurgical enterprise.
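A block-oriented (Hammerstein) predictor of the kind referenced here pairs a static input nonlinearity with a linear dynamic block. A one-step sketch with placeholder coefficients follows; these are not identified crusher values.

```python
def hammerstein_step(y_prev, u_prev, a1=0.6, b1=0.4, poly=(0.0, 1.0, 0.2)):
    """One-step prediction with a Hammerstein model: a static
    polynomial nonlinearity v = f(u) on the input, followed by a
    first-order linear block y[k] = a1*y[k-1] + b1*v[k-1].
    `poly` holds polynomial coefficients in ascending order."""
    v = sum(c * u_prev**i for i, c in enumerate(poly))
    return a1 * y_prev + b1 * v
```

In an adaptive scheme, one controller re-identifies `poly`, `a1`, and `b1` from plant data while the other uses the current model to compute predictive control moves, mirroring the two-controller split described above.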
- Research Article
- 10.3390/electronics15010100
- Dec 24, 2025
- Electronics
- Duc-Thuan Dam + 3 more
Post-quantum cryptography (PQC) is rapidly being standardized, with key primitives such as Key Encapsulation Mechanisms (KEMs) and Digital Signature Algorithms (DSAs) moving into practical applications. While initial research focused on pure software and pure hardware implementations, the focus is shifting toward flexible, high-efficiency solutions suitable for widespread deployment. A system-on-chip (SoC) is a viable option because it can flexibly coordinate hardware and software. However, the main drawback of such a system is the latency of exchanging data during computation. Currently, most SoCs are implemented on FPGAs, and few have been realized as ASICs. This paper introduces a complete RISC-V SoC design in an ASIC for the Module-Lattice-based KEM (ML-KEM). Our system features a RISC-V processor tightly integrated with a high-efficiency Number Theoretic Transform (NTT) accelerator, which leverages custom instructions to accelerate cryptographic operations. Our research achieved the following results: (1) the accelerator provides a speedup of up to 14.51× for NTT and 16.75× for inverse NTT operations compared to other RISC-V platforms; (2) this leads to end-to-end performance improvements for ML-KEM of up to 56.5 % for security level I, 50.9 % for level III, and 45.4 % for level V; (3) the ASIC is fabricated in a 180 nm CMOS process, operates at a maximum frequency of 118 MHz with an area overhead of 8.7 %, and achieves a minimum power consumption of 5.913 μW at 10 kHz and a 0.9 V supply voltage.
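The NTT at the heart of such accelerators is a modular-arithmetic analogue of the FFT. The sketch below shows a plain radix-2 cyclic NTT and its inverse over the ML-KEM modulus q = 3329, using the standard primitive root ζ = 17; note that ML-KEM itself uses an incomplete, negacyclic variant, so this is only an illustration of the O(n log n) butterfly structure the hardware accelerates, not the standardized transform:

```python
def ntt(a, root, p):
    """Recursive radix-2 Cooley-Tukey NTT of a (length a power of two)
    over Z_p, where root is a primitive len(a)-th root of unity mod p."""
    n = len(a)
    if n == 1:
        return a[:]
    even = ntt(a[0::2], root * root % p, p)
    odd = ntt(a[1::2], root * root % p, p)
    out = [0] * n
    w = 1
    for k in range(n // 2):
        t = w * odd[k] % p
        out[k] = (even[k] + t) % p            # butterfly: top half
        out[k + n // 2] = (even[k] - t) % p   # butterfly: bottom half
        w = w * root % p
    return out

def intt(a, root, p):
    """Inverse NTT: forward transform with root^-1, scaled by n^-1 mod p."""
    n = len(a)
    inv_n = pow(n, p - 2, p)
    res = ntt(a, pow(root, p - 2, p), p)
    return [x * inv_n % p for x in res]

# Toy parameters: n = 8 points over q = 3329; 17 is a primitive
# 256th root of unity mod 3329, so 17^(256/8) has order 8.
p, n = 3329, 8
root = pow(17, 256 // n, p)
a = [1, 2, 3, 4, 5, 6, 7, 8]
assert intt(ntt(a, root, p), root, p) == a
```

An ASIC accelerator typically implements these butterflies in an iterative, in-place pipeline with precomputed twiddle factors rather than recursion, but the arithmetic is the same.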
- Research Article
- 10.15588/1607-3274-2025-4-3
- Dec 24, 2025
- Radio Electronics, Computer Science, Control
- R M Babakov + 2 more
Context. The problem of algebraic synthesis of a finite state machine with a datapath of transitions is considered. The circuit of such a state machine may require lower hardware expenses and have a lower cost compared to circuits of other classes of digital control units. The object of research is the process of finding complete and partial solutions of the problem of algebraic synthesis of a finite state machine using specialized algorithms. One such algorithm is the previously known algorithm of complete sequential enumeration of state coding variants with a fixed set of transition operations. In the vast majority of cases, complete sequential enumeration takes too long, which makes its practical application in the synthesis of finite state machines with operational transformation of state codes impossible. This paper proposes a new approach: replacing the complete sequential enumeration of state coding variants with pseudo-random coding. This increases the number of state codes that change in each iteration of the algorithm and can contribute to a faster search for satisfactory solutions to the algebraic synthesis problem. Objective. Development and research of an algorithm for finding solutions to the algebraic synthesis problem of a finite state machine with a datapath of transitions based on pseudo-random selection of state codes. Method. The research is based on the structure of a finite state machine with a datapath of transitions. The synthesis of the state machine circuit involves a mandatory stage of algebraic synthesis, whose result is the combination of a particular state encoding with an assignment of arithmetic-logical operations to state machine transitions. Such a combination is called a solution to the algebraic synthesis problem.
In the general case there are many solutions for a given finite state machine, each of which can be either complete (operations are mapped to all transitions) or partial (some transitions cannot be implemented by any of the given operations). The more transitions are implemented by the given operations, the lower the hardware expenses required to implement the state machine circuit and the better the solution found. The search for the best solution requires considering a large number of possible state encoding variants. The paper presents a modification of a previously known algorithm, which consists in replacing the complete sequential enumeration of state encoding variants with pseudo-random code generation. Both algorithms were implemented in software in Python and tested on the example of a finite state machine implementing an abstract control algorithm. The experiments investigated which of the algorithms finds the best solution to the algebraic synthesis problem in a fixed time, and they were repeated for different sets of transition operations. Their purpose was to evaluate which state code assignment strategy is more effective: sequential enumeration of state codes or their pseudo-random generation. Results. Using the example of an abstract control algorithm, it is demonstrated that, in general, pseudo-random assignment of state codes finds better solutions to the algebraic synthesis problem in the same time than sequential enumeration of state codes. Factors such as computer speed or the method of pseudo-random generation of state codes do not have a significant impact on the experimental results. The advantage of pseudo-random generation of state codes is preserved when using different sets of transition operations. Conclusions.
The basis of the algebraic synthesis of a finite state machine with a datapath of transitions is an algorithm for finding solutions to the algebraic synthesis problem. The article proposes an algorithm for finding such solutions based on pseudo-random encoding of the finite state machine's states. The software implementation of this algorithm has shown that this approach is generally better than sequential enumeration of state encoding variants, since it finds better solutions (solutions with fewer operationally unimplemented transitions) in the same time. Pseudo-random assignment of state codes can form the basis of future algorithms for the algebraic synthesis of finite state machines.
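To make the comparison concrete, here is a small illustrative sketch in Python (the language the paper's experiments used). The state machine, operation set, and code width below are invented for demonstration and do not come from the paper: each strategy searches assignments of codes to states and scores them by how many transitions no given operation can implement.

```python
import itertools
import random

# Hypothetical example: five states, seven transitions, 3-bit codes,
# and three candidate transition operations on state codes.
STATES = ["s0", "s1", "s2", "s3", "s4"]
TRANSITIONS = [("s0", "s1"), ("s1", "s2"), ("s2", "s3"), ("s3", "s4"),
               ("s4", "s0"), ("s1", "s3"), ("s2", "s0")]
OPS = [lambda c: (c + 1) % 8, lambda c: (c + 2) % 8, lambda c: c ^ 1]

def unimplemented(code):
    """Number of transitions not realizable by any given operation."""
    return sum(all(op(code[a]) != code[b] for op in OPS)
               for a, b in TRANSITIONS)

def best_sequential(trials):
    """Score the first `trials` encodings in fixed lexicographic order."""
    best = len(TRANSITIONS)
    for i, codes in enumerate(itertools.permutations(range(8), len(STATES))):
        if i >= trials:
            break
        best = min(best, unimplemented(dict(zip(STATES, codes))))
    return best

def best_random(trials, seed=1):
    """Score `trials` pseudo-random encodings (distinct codes per state)."""
    rng = random.Random(seed)
    best = len(TRANSITIONS)
    for _ in range(trials):
        codes = rng.sample(range(8), len(STATES))
        best = min(best, unimplemented(dict(zip(STATES, codes))))
    return best
```

On larger machines the encoding space explodes combinatorially, which is why pseudo-random sampling can visit more diverse encodings per unit time than a fixed-order enumeration that changes only the low-order codes at each step.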
- Research Article
- 10.15588/1607-3274-2025-4-4
- Dec 24, 2025
- Radio Electronics, Computer Science, Control
- O M Berezsky + 2 more
Context. The article addresses the problem of image similarity assessment based on the Fréchet distance metric and its modifications. In this context, images are approximated by polygonal curves. The problem arises from the need to quantitatively evaluate image similarity for tasks such as image generation, clustering, and recognition. Quantitative assessment of the proximity of biomedical images supports decision-making in automated diagnostic systems. The object of the study is the process of image similarity evaluation. The subject of the study is the Fréchet distance metric and its modifications. Objective. To develop a method for determining the fuzzy discrete Fréchet distance, to evaluate the computational complexity of the proposed method, to implement the algorithm for determining the fuzzy discrete Fréchet distance in software, and to conduct computational experiments to evaluate the fuzzy discrete Fréchet distance between polygons. Method. The article presents a method for determining the fuzzy discrete Fréchet distance based on the fuzzy Fréchet metric between polygonal curves. The fuzzy Fréchet metric is grounded in the classical Fréchet distance defined on the space of parameterized curves. The approximation required for practical applications is achieved through discretization of the fuzzy Fréchet metric. The developed method estimates the fuzzy discrete Fréchet distance between polygonal curves by adapting the algorithm for computing the classical discrete Fréchet distance. Results. The computer experiments were conducted on a set of predefined regions approximated by polygonal curves. Based on the proposed method, an algorithm was developed to evaluate the fuzzy discrete Fréchet distance. The algorithm exhibits low computational complexity, proportional to the product of the numbers of discretized segments of the two polygonal curves: O(C·m·n). This enables estimation of the discrete Fréchet distance with a specified similarity threshold.
The software implementation of the method is intended to be integrated into an automated medical diagnostic system. Conclusions. The results obtained in the study allow recommending the developed method of evaluating image similarity based on the fuzzy discrete Fréchet distance for broad application in computer vision systems, including image generation, clustering, and recognition.
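The fuzzy method adapts the standard dynamic-programming algorithm for the classical discrete Fréchet distance. For reference, here is a minimal sketch of that classical O(m·n) algorithm (the paper's fuzzy extension is not reproduced here):

```python
import math

def discrete_frechet(P, Q):
    """Classical discrete Frechet distance between polygonal curves
    P and Q (lists of 2-D points), via O(m*n) dynamic programming."""
    m, n = len(P), len(Q)
    d = lambda i, j: math.dist(P[i], Q[j])   # Euclidean point distance
    ca = [[0.0] * n for _ in range(m)]       # ca[i][j]: coupling cost
    for i in range(m):
        for j in range(n):
            cost = d(i, j)
            if i == 0 and j == 0:
                ca[i][j] = cost
            elif i == 0:                     # can only advance along Q
                ca[i][j] = max(ca[0][j - 1], cost)
            elif j == 0:                     # can only advance along P
                ca[i][j] = max(ca[i - 1][0], cost)
            else:                            # advance P, Q, or both
                ca[i][j] = max(min(ca[i - 1][j],
                                   ca[i - 1][j - 1],
                                   ca[i][j - 1]), cost)
    return ca[-1][-1]

# Two parallel polylines offset by 1: the distance is exactly 1.0.
P = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
Q = [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
print(discrete_frechet(P, Q))  # → 1.0
```

A fuzzy variant would replace the crisp point distance and max/min aggregation with membership-weighted counterparts; the specific construction is given in the article.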