Dimensionality reduction and deep learning algorithm efficacy on the breast cancer diagnostic dataset

Breast cancer is a significant threat because it is the most frequently diagnosed form of cancer and one of the leading causes of mortality among women. Early diagnosis and timely treatment are crucial for saving lives and reducing treatment costs. Various medical imaging techniques, such as mammography, computed tomography, histopathology, and ultrasound, are contemporary approaches for detecting and classifying breast cancer. Machine learning professionals prefer Deep Learning algorithms when analyzing substantial medical imaging data. However, the application of deep learning-based diagnostic methods in clinical practice is limited despite their potential effectiveness. Deep Learning methods are complex and opaque; however, their effectiveness can help balance these challenges. The research subject is the Deep Learning algorithms implemented in the WEKA software and their efficacy on the Wisconsin Breast Cancer dataset. The objective is a significant reduction of the dataset's dimensionality without loss of predictive power. Methods. Computer experiments in the WEKA environment cover preprocessing and supervised and unsupervised Deep Learning on the full and reduced datasets, with estimates of their efficacy. Results. Triple sequential filtering notably reduced the dimensionality of the initial dataset from 30 attributes down to four. Unexpectedly, all three Deep Learning classifiers implemented in WEKA (Dl4jMlp, Multilayer Perceptron, and Voted Perceptron) showed statistically indistinguishable performance. In addition, the performance was statistically the same for the full and reduced datasets. For example, the percentage of correctly classified instances was in the range of 95.9-97.7 % with a standard deviation of less than 2.5 %. Two neural clustering algorithms (Self-Organizing Map, SOM, and Learning Vector Quantization, LVQ) also showed similar results. The two clusters in all datasets are not well separated, but they accurately represent both preassigned classes, with Fowlkes–Mallows index (FMI) values ranging from 0.81 to 0.99. Conclusion. The results indicate that the dimensionality of the Wisconsin Breast Cancer dataset, which is increasingly becoming the "gold standard" for diagnosing malignant versus benign tumors, can be significantly reduced without losing predictive power. The Deep Learning algorithms in WEKA deliver excellent performance for both supervised and unsupervised learning, regardless of whether the full or reduced dataset is used.
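For illustration, the workflow described above can be approximated outside WEKA. The sketch below uses scikit-learn as a stand-in for WEKA's classifiers and SOM/LVQ clusterers (which it does not reproduce): it reduces the 30-attribute Wisconsin dataset to four attributes with a filter method, scores a multilayer perceptron by cross-validation, and compares two-cluster assignments with the preassigned classes via the Fowlkes–Mallows index. All modelling choices here are illustrative assumptions, not the study's setup.

```python
# Illustrative sketch only: the study used WEKA (Dl4jMlp, Multilayer Perceptron,
# Voted Perceptron, SOM, LVQ); scikit-learn is used here as a stand-in to show
# the same workflow: filter-based reduction from 30 to 4 attributes, supervised
# classification, and cluster agreement via the Fowlkes-Mallows index.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import fowlkes_mallows_score

X, y = load_breast_cancer(return_X_y=True)                    # 569 instances, 30 attributes
X_reduced = SelectKBest(f_classif, k=4).fit_transform(X, y)   # keep 4 attributes

for name, data in [("full (30 attr.)", X), ("reduced (4 attr.)", X_reduced)]:
    data = StandardScaler().fit_transform(data)
    acc = cross_val_score(MLPClassifier(max_iter=2000, random_state=0),
                          data, y, cv=10).mean()
    # Two clusters compared against the Malignant/Benign labels, as in the paper,
    # but with k-means standing in for SOM/LVQ.
    fmi = fowlkes_mallows_score(y, KMeans(n_clusters=2, n_init=10,
                                          random_state=0).fit_predict(data))
    print(f"{name}: 10-fold accuracy = {acc:.3f}, FMI = {fmi:.3f}")
```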

Open Access
The use of artificial intelligence in adapting a UI design system to end-customer requirements

This paper demonstrates an approach for developing an AI-based UI design system to improve a company's white-labeling (rebranding) process. This is the process of removing a product or service's original branding and replacing it with the branding of another company or individual. The main objectives of the research include the development of methods for optimizing rebranding, automating the delivery of designers' work results, and achieving project-wide improvement in the design adaptation process for the end distributor, known as the white-labeling process. The research objective is to analyze the existing rebranding process and ready-made artificial intelligence solutions that can improve it. This research identifies innovative methods for implementing artificial intelligence in the rebranding process to facilitate and speed up tasks related to design and marketing. The research methods include analyzing existing rebranding practices, considering ready-made solutions that use artificial intelligence, and conducting experiments and practical applications of new methods to improve the process. The scientific novelty of this research lies in the implementation of artificial intelligence in the rebranding field and the development of effective methods for its improvement. As a result, improvements are achieved through the deployment of an AI-driven solution engineered around the design token concept, which serves as a pivotal element for standardizing and harmonizing the work of designers. This methodology involves a comprehensive adjustment of the AI model to integrate seamlessly with existing design systems, thereby facilitating the transformation of design systems and brand books into tangible design tokens. The process of integrating AI into design workflows involves extensive model training using openly accessible community data. Careful consideration is given to the selection of datasets, ensuring that they meet rigorous criteria for evaluating the quality and efficacy of artificial intelligence learning. These criteria encompass factors such as data relevance, diversity, and representativeness, as well as considerations of ethical and legal compliance. In conclusion, by leveraging this approach, organizations can effectively harness the power of AI to drive transformative change in design processes, ultimately enhancing efficiency, consistency, and innovation across their operations. By adopting various aspects of AI integration, this paper provides an updated UI design process with the ability to use AI during client-centric design development.
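As a purely hypothetical illustration of the design-token idea mentioned above, the sketch below expresses a brand book as named tokens and treats white labeling as a substitution of token values. Every token name and value is invented for the example and is not taken from the paper.

```python
# Hypothetical illustration of the design-token idea described in the abstract:
# a brand book is expressed as named tokens, and white labeling becomes a
# substitution of token values rather than a manual redesign. All token names
# and values below are invented for the example.
base_tokens = {
    "color.primary":   "#0057B8",
    "color.secondary": "#FFD700",
    "font.family":     "Inter",
    "radius.button":   "8px",
}

client_overrides = {          # supplied by the end distributor (white-label client)
    "color.primary":   "#C8102E",
    "font.family":     "Roboto",
}

def rebrand(tokens: dict, overrides: dict) -> dict:
    """Return a new token set with client-specific values applied."""
    return {**tokens, **overrides}

print(rebrand(base_tokens, client_overrides))
```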

Open Access
Towards the improvement of project team performance based on large language models

The subject of the study is a method for identifying poor-quality project sprint task descriptions to improve team performance and reduce project risks. The purpose of the study is to improve the quality of textual descriptions of sprint tasks in tracking systems by implementing models for identifying and improving potentially poor task descriptions. Research questions: 1. Can poor-quality project sprint task descriptions be identified using clustering? 2. How can the power of large language models (LLMs) be utilized to identify and improve textual descriptions of tasks? Objectives: to analyze research on approaches to improving descriptions using clustering and visualization techniques for project tasks, to collect and prepare textual descriptions of sprint tasks, to identify potentially poor task descriptions based on clustering of their vector representations, to study the effect of prompts on obtaining vector representations of tasks, to improve task descriptions using LLMs, and to develop a technique for improving project team effectiveness based on LLMs. The methods used were vector representation of texts, PCA and t-SNE dimensionality reduction, agglomerative clustering, and prompting. The following results were obtained. An approach to improving the performance of the project team based on the use of LLMs was proposed. Answering the first research question, it was found that there are no linguistic features affecting the perception of textual descriptions of project sprint tasks. In response to the second research question, a model for identifying potentially poor task descriptions is proposed to reduce the project risks associated with misunderstanding of task context. Conclusions. The results suggest that project sprint task descriptions can be improved by using large language models, supporting the project team's shared understanding. Future research should use project source documentation and project context as a vector repository and source of context for the LLM. The next step is to integrate the LLM into the project task tracking system.
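A minimal sketch of the clustering step described above is given below, using scikit-learn with TF-IDF vectors as a stand-in for the LLM-based vector representations used in the study; the task texts and the choice of two clusters are illustrative assumptions.

```python
# Sketch of the pipeline described above: vectorize task descriptions, reduce the
# space with PCA, and cluster agglomeratively so that small or outlying clusters
# can be reviewed as potentially poor descriptions. TF-IDF stands in for the
# LLM-based vector representations used in the study; the task texts are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

tasks = [
    "Implement OAuth2 login flow for the web client, including token refresh",
    "Fix bug",                                  # likely too vague
    "Add integration tests for the payment service retry logic",
    "Do the thing we discussed",                # likely too vague
    "Document the REST API endpoints for the reporting module",
]

vectors = TfidfVectorizer().fit_transform(tasks).toarray()
reduced = PCA(n_components=2).fit_transform(vectors)
labels = AgglomerativeClustering(n_clusters=2).fit_predict(reduced)

for text, label in zip(tasks, labels):
    print(label, text)   # clusters flagged for review could then be rewritten by an LLM
```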

Open Access
Classification of disinformation in hybrid warfare: an application of XLNet during Russia’s war against Ukraine

The spread of disinformation has become a critical component of hybrid warfare, particularly in Russia’s war against Ukraine, where social media serves as a battlefield for influence and propaganda. This study develops a comprehensive methodology for classifying disinformation in the context of hybrid warfare, focusing on Russia’s war against Ukraine. The objective of this study is to address the challenges of disinformation detection, particularly the increased spread of propaganda due to hybrid warfare. The study focuses on the use of transformer-based language models, specifically XLNet, to classify multilingual, context-sensitive disinformation. The tasks of this study are to analyze current research and develop a methodology to effectively classify disinformation using the XLNet model. The proposed methodology includes several key components: data preprocessing to ensure quality, application of XLNet for training on diverse datasets, and hyperparameter optimization to handle the complexities of disinformation data. The study used datasets containing pro-Russian and neutral/pro-Ukrainian tweets, and the XLNet model demonstrated strong performance metrics, including high precision, recall, and F1-scores across different dataset sizes. The results showed that accuracy initially improved with increasing data volume but declined slightly at the largest dataset sizes, suggesting the need to balance data quality and quantity. The proposed methodology addresses the gaps in automated disinformation detection by integrating transformer-based models with advanced preprocessing and training techniques. This research improves the capacity for real-time detection and analysis of disinformation, thus contributing to public information governance and strategic communication efforts during wartime.
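A minimal sketch of XLNet-based classification with the Hugging Face Transformers library (the model family named in the study) is shown below; the texts, labels, and settings are placeholders rather than the study's data or hyperparameters.

```python
# Minimal sketch of binary tweet classification with XLNet using Hugging Face
# Transformers. The texts, labels, and hyperparameters are placeholders, not the
# study's actual data or settings.
import torch
from transformers import XLNetTokenizer, XLNetForSequenceClassification

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased", num_labels=2)
model.train()

texts = ["example pro-Russian narrative tweet", "example neutral tweet"]
labels = torch.tensor([1, 0])               # 1 = disinformation, 0 = neutral (placeholder)

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch, labels=labels)     # returns loss and logits
outputs.loss.backward()                     # an optimizer step would follow during fine-tuning
predictions = outputs.logits.argmax(dim=-1)
print(predictions)
```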

Open Access
Synthesis of reliably stable neural network controllers with optimization of transient process parameters

The subject of this paper is a method for synthesizing stable neural network controllers with optimization of transient process parameters. The goal is to develop a method for synthesizing a neural network controller for control systems that guarantees closed-loop system stability through the automated selection of a Lyapunov function, using an additional neural network trained on data obtained while solving the integer linear programming problem. The tasks to be solved are: to study the stability of a closed-loop control system with a neural network controller, to train the neurocontroller and the neural network Lyapunov function, to create an optimization model for loss function minimization, and to conduct a computational experiment as an example of stabilizing neural network controller synthesis. The methods used are: a method for training a neural network-based simulator of the control object described by a system of equations, taking into account the SmoothReLU activation function; the direct Lyapunov method to guarantee closed-loop system stability; and a mixed integer programming method for solving the optimization problem, which minimizes losses while ensuring stability and minimum-time regulation. The following results were obtained: the neural network used made it possible to reduce the transient process time to 3.0 s and to achieve a 2.33-fold reduction in overshoot compared with the traditional controller (using the example of the TV3-117 turboshaft engine fuel consumption model). The results demonstrate the proposed approach's advantages, notably increasing dynamic stability and parameter maintenance accuracy and reducing fuel consumption fluctuations. Conclusions. This study is the first to develop a method for synthesizing a stabilizing neural network controller for helicopter turboshaft engines with guaranteed system stability based on Lyapunov theory. The proposed method's novelty lies in its linear approximation of the SmoothReLU activation function using binary variables, which allowed the stability problem to be reduced to an optimization problem solved by the mixed integer programming method. A system of constraints was developed that considers the control signal and stability conditions to minimize the system stabilization time. The results confirmed the proposed approach's effectiveness in increasing engine adaptability and energy efficiency in various operating modes.
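The method relies on a piecewise-linear treatment of the SmoothReLU (softplus) activation. The sketch below only illustrates such an approximation with arbitrarily chosen breakpoints and does not reproduce the study's mixed-integer encoding with binary variables.

```python
# Sketch: SmoothReLU (softplus) and a simple piecewise-linear approximation of it.
# The study encodes such an approximation with binary variables inside a
# mixed-integer program; here we only illustrate the approximation itself,
# with arbitrarily chosen breakpoints.
import numpy as np

def smooth_relu(x):
    """SmoothReLU / softplus: ln(1 + e^x)."""
    return np.log1p(np.exp(x))

breakpoints = np.array([-6.0, -2.0, 0.0, 2.0, 6.0])      # illustrative choice
values = smooth_relu(breakpoints)

def piecewise_linear(x):
    """Linear interpolation between softplus values at the breakpoints."""
    return np.interp(x, breakpoints, values)

x = np.linspace(-6, 6, 13)
err = np.abs(smooth_relu(x) - piecewise_linear(x))
print(f"max approximation error on [-6, 6]: {err.max():.4f}")
```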

Open Access
Analysis of the implementation efficiency of digital signal processing systems on the technological platform SoC ZYNQ 7000

The subject of this paper is the analysis of DSP algorithm implementations based on High-Level Synthesis (HLS) and SIMD instruction acceleration on the SoC hardware platform. The goal of this article is to analyze various FIR filter software and hardware implementations based on the technological platform SoC ZYNQ 7000 while obtaining metrics of hardware resource consumption, power efficiency, and execution performance. The tasks are as follows: determine the ways of implementing the algorithms; choose the analysis criteria for the multivariate experiment; implement the algorithms using SIMD instructions on the ARM part of the given SoC; implement the algorithms using High-Level Synthesis for the FPGA part; and measure and obtain the results for each signal topology. The methods used are High-Level Synthesis, optimization techniques based on vector instructions, and multivariate experiment analysis. The following results were obtained for the given criteria and metrics: the FIR filter was implemented on the ZedBoard development platform with the SoC ZYNQ 7000. The data were obtained from post-synthesis power analysis and dynamic SoC consumption measurements using tools from Xilinx and Analog Devices. The corresponding IP blocks were implemented using High-Level Synthesis. The experiment was completed to obtain execution performance metrics. Conclusions. The scientific novelty of the obtained results is summarized as follows: a comparative analysis was performed for the set of implementations of the given algorithms deployed on the ZYNQ platform using both SIMD instructions and several HLS-based topologies for the FPGA-offload execution strategy. The analysis of the multivariate experiment was also completed for the selected criteria: power consumption, filtering speed (the inverse of delay), and hardware cost as a percentage of the used resources.
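For reference, the operation being accelerated is a direct-form FIR filter. The NumPy sketch below shows only the functional computation, with illustrative coefficients, and is not a model of the SIMD or HLS implementations on the ZYNQ platform.

```python
# Reference FIR filter computation (the operation whose SIMD- and HLS-based
# implementations are compared in the paper). Coefficients and input are
# illustrative; NumPy is used only as a functional reference.
import numpy as np

def fir_filter(x, taps):
    """y[n] = sum_k taps[k] * x[n-k] (direct-form FIR)."""
    return np.convolve(x, taps, mode="full")[:len(x)]

taps = np.array([0.1, 0.15, 0.5, 0.15, 0.1])      # example low-pass-like taps
signal = np.sin(2 * np.pi * 0.05 * np.arange(128)) + 0.2 * np.random.randn(128)
filtered = fir_filter(signal, taps)
print(filtered[:8])
```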

Open Access
Performance evaluation of inset feed microstrip patch antenna parameters with different substrate materials for 5G wireless applications

This study evaluates the performance of an inset-feed microstrip antenna for various substrate materials (FR4, Rogers 5880, Rogers 6002, Polystyrene, and Ceramic) with different thicknesses (1.6 mm, 3.2 mm, and 4.8 mm) for 5G applications, focusing on key parameters such as return loss, efficiency, directivity, and realized gain. The goal is to determine the optimal substrate material and thickness that offers the best combination of these performance metrics across a frequency range of 3 to 4 GHz. The proposed method uses a new hybrid GA-PSO algorithm with Dynamic Adaptive Mutation and Inertia Control (DAMIC). The study optimized the microstrip patch antenna (MSPA) design for each material and thickness, followed by detailed simulations using the Advanced Design System (ADS) tool. The approach included parametric analysis and systematic comparisons across the chosen substrate materials, quantifying their performance using the specified metrics. The results indicate that Rogers 5880 consistently outperforms the other substrates in terms of efficiency, directivity, and gain across all thicknesses. Polystyrene and Rogers 6002 also exhibited commendable performance, especially for the thicker substrates (3.2 mm and 4.8 mm), with Polystyrene achieving the highest directivity at 4.8 mm thickness. Rogers 5880 again led in efficiency, with values consistently above 70 % across all thicknesses, peaking at 86.38 % at 1.6 mm and 86.39 % at 3.2 mm. Ceramic and FR4 substrates demonstrated relatively lower performance, with Ceramic showing a moderate peak efficiency of 75.98 % at 1.6 mm and 50.79 % at 3.2 mm, while FR4 consistently had the lowest efficiency and directivity values, highlighting its limitations for high-performance antenna applications. Regarding return loss, Rogers 5880 displayed the most favorable characteristics, maintaining values well below -10 dB across the frequency range, which signifies excellent impedance matching. Rogers 6002 and Polystyrene also showed acceptable return loss characteristics, although slightly higher than those of Rogers 5880, and they remained below -10 dB for most frequencies. Ceramic and FR4 exhibited higher return loss values, suggesting poorer impedance matching and higher signal reflection. In conclusion, the GA-PSO DAMIC optimization technique is a highly effective approach for designing antennas for 5G systems, enabling customized solutions for various substrates. Unlike traditional methods, the GA-PSO DAMIC approach enables precise tuning of key antenna parameters (return loss, gain, directivity, and efficiency) across various substrate configurations and thicknesses. The results demonstrate that the Rogers 5880 substrate, particularly at a thickness of 1.6 mm, consistently offers superior performance metrics, including high efficiency and low return loss, confirming its suitability for 3-4 GHz 5G applications. The results also reveal that Rogers 5880 is the superior substrate for high-frequency applications requiring high efficiency, directivity, and gain, followed by Polystyrene and Rogers 6002, particularly for thick substrates. Ceramic and FR4, although adequate in certain scenarios, are generally less optimal for high-performance requirements because of their lower efficiency and higher return loss. These findings provide critical insights into antenna design and material selection, emphasizing the significance of substrate choice in achieving the desired performance metrics in modern RF 5G applications.
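The abstract names a hybrid GA-PSO algorithm with dynamic adaptive mutation and inertia control but does not specify its formulation; the sketch below shows one generic way to combine PSO velocity updates (with decaying inertia) and GA-style mutation (with a shrinking rate) on a toy objective. It is an assumption-laden illustration, not the authors' DAMIC algorithm.

```python
# Generic sketch of a hybrid GA-PSO loop with decaying inertia and an adaptive
# mutation rate, on a toy objective. Every detail here (update rules, schedules,
# objective) is an illustrative assumption, not the authors' method.
import numpy as np

rng = np.random.default_rng(0)

def objective(x):                      # toy stand-in for an antenna cost function
    return np.sum(x**2, axis=-1)

n_particles, dims, iters = 20, 4, 100
pos = rng.uniform(-5, 5, (n_particles, dims))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), objective(pos)
gbest = pbest[pbest_val.argmin()].copy()

for t in range(iters):
    w = 0.9 - 0.5 * t / iters                      # inertia decays over time
    mut_rate = 0.3 * (1 - t / iters)               # mutation rate adapts (shrinks)
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    # GA-style mutation: perturb a random subset of coordinates
    mask = rng.random(pos.shape) < mut_rate
    pos = pos + mask * rng.normal(0, 0.5, pos.shape)
    val = objective(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best value found:", objective(gbest))
```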

Open Access
Development of a remote diagnostic monitoring system with an open architecture for pumping equipment

The aim of the study was to develop a remote diagnostic monitoring system with an open architecture for pumping equipment to improve the reliability and efficiency of pump operation in various industrial sectors. The system is designed for the periodic collection and analysis of vibration and temperature signals, which allows for the prompt identification of potential equipment malfunctions and the avoidance of emergency shutdowns during the production process. In particular, the study sought to develop an effective open architecture for a diagnostic monitoring system for pumping equipment based on IoT technologies. The primary focus is on creating a system architecture that simplifies the installation and operation of equipment, ensures scalability and ease of integration with existing enterprise information systems, and reduces the material costs of implementation. To achieve this goal, the following objectives were addressed within the study: 1) selection of informative features from vibration signals that allow for the diagnosis of the most common faults in pumping equipment during periodic monitoring; 2) selection of hardware specifications that ensure the diagnostic monitoring system meets the stated requirements; and 3) development of a software and network architecture for the diagnostic monitoring system based on open hardware and software standards. The results of the experiments demonstrated that the developed system enables effective monitoring of the condition of pumping equipment and reduces the risk of emergency shutdowns, thereby optimizing operating costs. The incorporation of wireless technologies, open software products, and standards makes the system flexible and cost-effective, which is especially important for small and medium-sized industrial enterprises. Conclusion. The use of the proposed monitoring system improves the reliability of pumping equipment and enables maintenance management based on current condition data.
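Objective 1 concerns informative features extracted from vibration signals; the sketch below computes common time-domain indicators (RMS, peak, crest factor, kurtosis) on a synthetic signal, as an illustration only; the paper's actual feature set and sampling parameters may differ.

```python
# Sketch of time-domain vibration indicators commonly used for pump condition
# monitoring (RMS, peak, crest factor, kurtosis). The synthetic signal and the
# particular feature set are illustrative; the paper's selected features may differ.
import numpy as np
from scipy.stats import kurtosis

fs = 10_000                                       # sampling rate, Hz (assumed)
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.randn(t.size)   # 50 Hz tone + noise

rms = np.sqrt(np.mean(signal**2))
peak = np.max(np.abs(signal))
features = {
    "rms": rms,
    "peak": peak,
    "crest_factor": peak / rms,
    "kurtosis": kurtosis(signal),
}
print(features)
```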

Open Access
Time series analysis of leptospirosis incidence for forecasting in the Baltic countries using the ARIMA model

Leptospirosis, a zoonotic disease with significant public health implications, presents considerable forecasting challenges due to its seasonal patterns and environmental sensitivity, especially in under-researched regions such as the Baltic countries. This study aimed to develop an ARIMA-based forecasting model for predicting leptospirosis incidence across Estonia, Latvia, and Lithuania, where current disease data are limited and variable. The study investigates the epidemic process of leptospirosis, while its subject is the application of time series forecasting methodologies suitable for epidemiological contexts. Methods. The ARIMA model was applied to each country to identify temporal patterns and generate short-term morbidity forecasts using confirmed leptospirosis case data from the European Centre for Disease Prevention and Control from 2010 to 2022. Results. The model’s performance was assessed using the Mean Absolute Percentage Error (MAPE), revealing that Lithuania had the most accurate forecast, with a MAPE of 6.841. The forecast accuracy for Estonia and Latvia was moderate, likely reflecting case variability and differing regional epidemiological patterns. These results demonstrate that ARIMA models can effectively capture general trends and provide short-term morbidity predictions, even within diverse epidemiological settings, suggesting ARIMA’s utility in low-resource and variable-data environments. Conclusions. The scientific novelty of this study lies in its application of ARIMA modelling to leptospirosis forecasting within the Baltic region, where comprehensive time series studies on the disease are scarce. From a practical perspective, this model offers a valuable tool for public health authorities by supporting targeted interventions, more efficient resource allocation, and timely response planning for leptospirosis and similar zoonotic diseases. The ARIMA model’s adaptability and straightforward application across countries demonstrate its potential for informing public health decision-making in settings with limited data on disease patterns. Future research should expand on this model by developing multivariate forecasting approaches incorporating additional factors to refine the model’s predictive accuracy. This approach could further improve our understanding of leptospirosis dynamics and enhance intervention strategies.
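A minimal sketch of this workflow with statsmodels is shown below: fit an ARIMA model on a training window, forecast a hold-out year, and score the forecast with MAPE. The synthetic series and the (p, d, q) order are placeholders, not the models fitted for Estonia, Latvia, or Lithuania.

```python
# Sketch of the forecasting workflow described above using statsmodels: fit an
# ARIMA model on a training window, forecast a hold-out period, and score with
# MAPE. The series and the (p, d, q) order are placeholders.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
index = pd.date_range("2010-01-01", periods=156, freq="MS")      # 13 years, monthly
cases = pd.Series(5 + 3 * np.sin(2 * np.pi * index.month / 12)
                  + rng.poisson(2, index.size), index=index)

train, test = cases[:-12], cases[-12:]
model = ARIMA(train, order=(1, 0, 1)).fit()
forecast = model.forecast(steps=12)

mape = np.mean(np.abs((test.values - forecast.values) / test.values)) * 100
print(f"MAPE over the hold-out year: {mape:.2f} %")
```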

Open Access
Using artificial intelligence methods for the optimal synthesis of reversible networks

Considering the relentless progress in the miniaturization of electronic devices and the need to reduce energy consumption, technical challenges in the synthesis of circuit design solutions have become evident. As transistor sizes shrink toward the atomic scale in line with Moore's Law, physical limits complicate further development. Additionally, reducing transistor sizes causes current leakage, leading to increased thermal noise, which can disrupt the proper functioning of digital devices. A promising solution to these problems is the application of reversible logic in circuit design. Reversible logic allows for a reduction in energy and information losses because reversible logical operations are performed without loss. The research synthesized optimal reversible circuits based on reversible gates using evolutionary algorithms and compared them with existing analogues. The focus of this study is on logical circuits built using reversible gates, which can significantly reduce energy losses; this is critical for modern and future electronic devices. The synthesis of reversible circuits is closely related to quantum computing, where quantum gates also possess a reversible nature. This enables the use of these synthesis methods to create quantum reversible logical computing devices, which in turn promotes the development of quantum technologies. The study focuses on the application of evolutionary artificial intelligence algorithms, specifically genetic algorithms and ant colony optimization algorithms, for the optimal synthesis of reversible circuits. As a result, a detailed description of the key concepts of the improved algorithms, simulation results, and a comparison of the two methods are provided. The efficiency of the reversible device synthesis was evaluated using the proposed implementations of the genetic algorithm and the ant colony optimization algorithm. The obtained results were compared with existing analogues and verified using the Qiskit framework in the IBM quantum computing laboratory. The conclusions describe the developed algorithms, which demonstrate high efficiency in solving circuit topology optimization problems. A genetic algorithm was developed, featuring multi-component mutation and a matrix approach to chromosome encoding combined with Tabu search to avoid local optima. The ant colony optimization algorithms were improved, including several changes to the proposed data representation model and to the structure and operational principles of the synthesis algorithm, enabling effective synthesis of devices on the NCT basis along with Fredkin gates. An improved structure for storing and using pheromones was developed to enable multi-criteria navigation in the solution space.
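The synthesized circuits are built on the NCT basis (NOT, CNOT, Toffoli) plus Fredkin gates and were verified with Qiskit; the sketch below shows such a verification step on an arbitrary small example circuit by checking that its matrix is a permutation matrix. It illustrates the check only and is not the paper's synthesis algorithm.

```python
# Sketch of verifying a small reversible circuit built on the NCT basis
# (NOT, CNOT, Toffoli) plus a Fredkin gate with Qiskit. The example circuit
# itself is arbitrary, chosen only to show the check.
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Operator

qc = QuantumCircuit(3)
qc.x(0)            # NOT
qc.cx(0, 1)        # CNOT
qc.ccx(0, 1, 2)    # Toffoli
qc.cswap(0, 1, 2)  # Fredkin (controlled SWAP)

unitary = Operator(qc).data
# A circuit of classical reversible gates realizes a permutation of basis states:
# every entry of its matrix is 0 or 1 and each row/column contains exactly one 1.
real = np.round(unitary.real, 8)
is_permutation = (np.isin(real, [0.0, 1.0]).all()
                  and np.allclose(unitary.imag, 0)
                  and (real.sum(axis=0) == 1).all()
                  and (real.sum(axis=1) == 1).all())
print("reversible (permutation) circuit:", is_permutation)
```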

Open Access