DEVELOPMENT OF A 150 W LINEAR LABORATORY POWER SUPPLY UNIT

The laboratory power supply is an indispensable device for the manufacture, testing, adjustment, and repair of electronic equipment. The object of research is the process of selecting and justifying a device circuit based on modern components, developing a printed circuit board, and manufacturing and bench-testing a prototype of the device. A toroidal transformer of the required power was also calculated. Testing was carried out in the voltage- and current-stabilization modes, and the data obtained indicate that the device's circuitry can be further improved. For a laboratory power supply, a linear topology is preferable because of its low output ripple, which is critical when powering electronic equipment. A switching power supply can be used as an additional source when working with high-power circuits. A universal option is a bipolar power supply built on a linear circuit with an output voltage of 0 to ±30 V and a current of 0 to 5 A. This solution allows the device to be used with most radio-electronic devices whose power consumption does not exceed 150 W, including those sensitive to RF noise. Thanks to the bipolar circuit, it is also possible to work with high-quality audio-frequency amplifiers that require a bipolar supply, with operational amplifiers, and with some digital equipment. Galvanic isolation of the channels makes it possible to adjust the output parameters of each arm independently, which may be necessary when repairing digital equipment. Equipping the laboratory power supply with a current-stabilization unit makes it possible to use it as a battery charger and helps in locating short circuits in circuits. Short-circuit protection will save the power supply in the event of an emergency and, in some cases, the connected load as well.
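As a rough illustration of why transformer and heatsink sizing matter in a linear design of this class, consider the worst-case dissipation of the linear pass element; the 35 V unregulated rail and 90% efficiency below are our assumed figures for the sketch, not values from the article:

```latex
% Worst-case pass-element dissipation at low output voltage and full current
% (assuming an unregulated rail of about 35 V per arm; illustrative only):
P_{\mathrm{diss}} = (V_{\mathrm{in}} - V_{\mathrm{out}})\, I_{\mathrm{out}}
                  = (35\ \mathrm{V} - 5\ \mathrm{V}) \times 5\ \mathrm{A} = 150\ \mathrm{W}
% Transformer apparent power for a 150 W output at an assumed ~90% efficiency:
S \approx \frac{P_{\mathrm{out}}}{\eta} = \frac{150\ \mathrm{W}}{0.9} \approx 167\ \mathrm{VA}
```

Under these assumptions, the pass elements may briefly dissipate on the order of the full rated output power, which is why linear designs pair a transformer rated above the nominal output with substantial heatsinking.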

ADAPTIVE DOMAIN-SPECIFIC NAMED ENTITY RECOGNITION METHOD WITH LIMITED DATA

The ever-evolving volume of digital information requires the development of innovative search strategies aimed at obtaining the necessary data efficiently and economically. The urgency of the problem is emphasized by the growing complexity of information landscapes and the need for fast data-extraction methodologies. In the field of natural language processing, named entity recognition (NER) is an essential task for extracting useful information from unstructured text input for further classification into predefined categories. Nevertheless, conventional methods frequently encounter difficulties when confronted with a limited amount of labeled data, posing challenges in real-world scenarios where obtaining substantial annotated datasets is problematic or costly. To address the problem of domain-specific NER with limited data, this work investigates NER techniques that can overcome these constraints by continuously learning from newly collected information on top of pre-trained models. Several techniques are also used to make the most of the limited labeled data, such as active learning, exploiting unlabeled data, and integrating domain knowledge. Using domain-specific datasets with different levels of annotation scarcity, the fine-tuning process of pre-trained models, such as transformer-based (TRF) and Tok2Vec (token-to-vector) models, is investigated. The results show that, in general, expanding the volume of training data enhances most models' performance for NER, particularly for models with sufficient learning capacity. Depending on the model architecture and the complexity of the entity label being learned, the effect of more data on the model's performance can vary. After the training data is increased by 20%, the Tok2Vec model shows the most balanced growth overall, improving accuracy by 11% (recognizing 73% of entities) while retaining its processing speed. Meanwhile, with consistent processing speed and the highest F1-score, the transformer-based model (TRF) shows promise for effective learning with less data, achieving 74% successful prediction and a 7% increase in performance (to 81%) after the training data is expanded. Our results pave the way for the creation of more resilient and efficient NER systems suited to specialized domains and advance the field of domain-specific NER with sparse data. We also shed light on the relative merits of various NER models and training strategies, and offer perspectives for future research.
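One of the techniques named above, active learning, can be illustrated with a minimal uncertainty-sampling loop. The sketch below is generic, not the authors' pipeline: a toy scikit-learn classifier on synthetic data stands in for a NER model (which scores entity spans rather than rows), and the seed size and query batch size are placeholders.

```python
# Minimal uncertainty-sampling active-learning loop (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
labeled = list(range(20))                 # small seed of "annotated" examples
unlabeled = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for round_ in range(5):
    model.fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[unlabeled])
    # Least-confidence score: 1 minus the top class probability per example.
    uncertainty = 1.0 - probs.max(axis=1)
    # "Annotate" the 10 most uncertain examples and move them to the labeled pool.
    query = np.argsort(uncertainty)[-10:]
    picked = [unlabeled[i] for i in query]
    labeled.extend(picked)
    unlabeled = [i for i in unlabeled if i not in picked]
    print(f"round {round_}: labeled pool now {len(labeled)} examples")
```

The point of the loop is that each round spends the scarce annotation budget where the current model is least certain, which is how active learning stretches limited labeled data.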

AN OVERVIEW OF COMPONENTS FOR ENERGY EFFICIENT MULTIMEDIA NETWORKS BASED ON 5G RADIO ACCESS TECHNOLOGIES

Modern society is actively transitioning into an information-based one, and multimedia technologies have become an integral part of this process. Thanks to the proliferation of wireless access networks, users are becoming more mobile, and the development of the fifth generation of mobile networks, known as 5G, is a significant step in the advancement of information and communication technologies. 5G networks offer low latency and reliable connectivity, expanding the capabilities of mobile internet and machine communication. However, along with the opportunities provided by multimedia communication, there is a responsibility to consider the impact of associated technologies on the environment, as well as to address new challenges and the need for prudent resource utilization.
The article defines the concept of "multimedia" and discusses various aspects of this concept, including digital storage and processing of information, components (text, photos, audio, and video), interactivity, and hypertextuality. It is noted that the transmission of multimedia data and the use of information technologies are closely linked to wireless access networks.
The authors discuss challenges and solutions in the field of network energy efficiency. They provide statistics indicating that the information and communication technology (ICT) industry is responsible for a significant portion of global energy consumption and CO2 emissions, with radio access networks being a major contributor. Various components and technologies that can contribute to the development of energy-efficient multimedia networks based on 5G radio access technologies are examined. Specifically, heterogeneous networks, non-orthogonal multiple access (NOMA) technologies, and multiple-input multiple-output (MIMO) technologies are highlighted as key components for achieving energy efficiency.
The importance of using heterogeneous networks to reduce the distance between transmitters and receivers is emphasized, along with the possibility of putting small base stations into sleep mode when there is no network load. Technologies like NOMA and MIMO are discussed as crucial components for achieving spectral and energy efficiency.
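For reference, the energy-efficiency objective these components serve is commonly quantified as delivered bits per joule; the formulations below are the standard textbook ones, not expressions quoted from the article:

```latex
% Network energy efficiency: delivered throughput per unit of consumed power.
\eta_{\mathrm{EE}} = \frac{R}{P_{\mathrm{total}}} \quad \left[\mathrm{bit/J}\right]
% MIMO raises the numerator via spatial multiplexing; with N_t transmit and
% N_r receive antennas, equal power allocation, SNR \rho, and channel matrix H:
C_{\mathrm{MIMO}} = B \log_2 \det\!\left(\mathbf{I}_{N_r} + \frac{\rho}{N_t}\,\mathbf{H}\mathbf{H}^{\mathsf{H}}\right)
```

Techniques such as MIMO and NOMA aim to grow the throughput R faster than the consumed power P, while sleep modes for small cells attack the denominator directly.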
Additionally, the article focuses on wireless sensor networks (WSNs) and suggests ways to optimize them for energy efficiency. This includes operating sensor units only when necessary, implementing wireless charging, using energy-efficient optimization methods, and applying efficient routing schemes.
The authors also highlight the role of green data centers in reducing CO2 emissions and optimizing the use of green energy in high-performance networks. Methods such as using renewable energy sources, increasing the energy efficiency of hardware, and implementing energy-efficient routing are discussed.
In conclusion, the article underscores the importance of energy efficiency and reduced CO2 emissions in modern multimedia networks, particularly in the context of 5G networks. It calls for interdisciplinary efforts to address these critical challenges in the field of information and communication technologies.

ACCREDITATION AND PROSPECTS OF HACCP SYSTEM IMPLEMENTATION IN FOOD PRODUCTION

The need for accreditation and for implementing the Hazard Analysis and Critical Control Points (HACCP) system at food-industry enterprises, as a quality management system based on environmental friendliness and product safety, is substantiated. The main task in the development of the industry is to increase the competitiveness of products and to strengthen their innovative focus by implementing quality management systems that ensure product quality at all stages of the production (life) cycle and contribute to increasing the effectiveness of enterprises [4]. The HACCP system is such a food safety management system: one that has proven its effectiveness and is accepted at the international level. Guaranteeing food safety is the main goal of applying the HACCP concept to the production process. There are many factors that are not related to the production and processing of products but nevertheless have a negative impact on the safety of food products. One aspect of the spread and implementation of this product safety management system is that it serves as a basis for forming the quality of organic products. When certifying organic products, the necessary prerequisites are documentation of all stages of production and the ability to trace the path "from the field to the table". Accordingly, provided that the HACCP system is implemented, the lion's share of the documents required at enterprises processing organic agricultural products will already have been developed, and the requirements for product safety will also be met. The implementation of the HACCP system requires planning the control of all areas of the technological process and defining the limits of research, application, and maintenance of this system. An HACCP plan is drawn up at the enterprise: a document that, in accordance with the HACCP principles, defines the procedures and the sequence of actions to ensure the control of hazard factors. The HACCP plan covers all areas of product production, from raw material acquisition to product processing or packaging, as well as supply to retail.

RESEARCH OF AUDIO AND VIDEO TRAFFIC CHARACTERISTICS IN LOW-BANDWIDTH RADIO COMMUNICATION NETWORKS

The construction of mass service (queueing) systems, namely automated control systems, requires preliminary analysis and modeling of the traffic in their communication networks. Mathematical models of various types of traffic have been developed for public networks, which makes it possible to estimate the necessary functional characteristics of the equipment for building a communication network depending on the number of users. Low-bandwidth communication networks, which are built on ultra-high-frequency and very-high-frequency (UHF/VHF) radio stations, are characterized by low data rates and by high delay and jitter of data transmission. Special data-transmission protocols are adapted and developed to work in such communication networks. In this paper, the characteristics of audio and video traffic are studied in low-bandwidth communication networks built on UHF/VHF radio stations; the results will allow a software implementation of traffic simulation to be created for determining service availability at the stage of planning and designing the communication system. Two personal computers and two modern RF‑7850M-HH UHF/VHF radio stations were used to study the characteristics of audio traffic. The radio stations worked in three operating modes: narrowband mode with a fixed carrier frequency (FF), narrowband mode with pseudo-random adjustment of the operating frequency (QL1A), and wideband mode (ANW2C). The voice was transmitted in digital mode using the built-in MELP 2400 codec, and the "iperf-v2.0.5" software was used on the personal computers connected to these radio stations to determine the characteristics of the audio traffic. Two personal computers, two modern RF‑7850M‑HH UHF/VHF radio stations, a video encoder, and a video camera were used to study the video traffic characteristics. The radio stations operated in the ANW2C broadband mode. To evaluate the characteristics of the video traffic, the Wireshark software was used on a personal computer on which the video broadcast from the video encoder was received. It was found that voice transmission in low-speed communication networks based on UHF/VHF radio stations occupies a bandwidth of 2-2.5 kbit/s, and that when voice and data are transmitted simultaneously, the radios' data buffering and jitter increase. The resolution, bitrate, FPS, and necessary bandwidth of video traffic that can be transmitted via UHF/VHF radio communication channels are determined. Based on the conducted research, recommendations are provided for the transmission of video traffic over low-bandwidth communication channels.
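The measured voice bandwidth is consistent with simple arithmetic on the codec rate; the interpretation of the spread around the codec rate below is our reading, not a figure from the paper:

```latex
% The MELP 2400 vocoder emits a fixed 2400 bit/s voice payload:
R_{\mathrm{MELP}} = 2400\ \mathrm{bit/s} = 2.4\ \mathrm{kbit/s}
% This sits inside the measured 2-2.5 kbit/s band; values below the codec
% rate plausibly reflect speech pauses, and values above it framing and
% protocol overhead (our assumption, not stated in the paper).
```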

SYSTEM FOR DATA SELECTION AND ANALYSIS FOR SCIENTIFIC RESEARCH

The paper introduces a system designed for the meticulous selection and analysis of data in the realm of scientific research. The system offers a comprehensive suite of operations, allowing researchers to perform essential tasks such as sorting data for improved organization, eliminating duplicate entries to enhance data integrity, clustering data to uncover patterns, filtering on specific parameters to focus on relevant subsets, and clearing empty records for a refined dataset. Researchers can leverage advanced sorting functionality to organize data by specified parameters, enhancing readability and supporting a structured approach to analysis. The system incorporates mechanisms for identifying and removing duplicate entries, ensuring data accuracy and reliability in scientific investigations. Clustering algorithms empower researchers to discern patterns within datasets, providing insights crucial for scientific exploration. Targeted filters based on specific parameters refine datasets to the subsets of information relevant to the research objectives. Recognizing the importance of data completeness, the system also detects and clears empty records, preserving the integrity of analyses and facilitating more accurate research outcomes. This framework represents an advancement in scientific data management, providing researchers with a versatile toolset to elevate the precision and efficiency of their data-driven investigations.
Keywords: data selection system (a tool for the purposeful extraction and refinement of data in scientific research workflows); data analysis in scientific research (the systematic examination and interpretation of data to derive meaningful insights); data selection methods for scientific research (approaches and techniques for selectively curating and preparing data for rigorous analysis); systems for scientific research (technological frameworks that improve the efficiency and effectiveness of scientific inquiry); data processing and interpretation (the systematic manipulation and understanding of data to extract valuable information).
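The operations the abstract lists map naturally onto a dataframe workflow. The sketch below is our illustration in pandas/scikit-learn, not the authors' implementation, and the column names and data are invented:

```python
# Illustrative data-preparation pipeline: sorting, de-duplication,
# empty-record removal, parameter-based filtering, and clustering.
import pandas as pd
from sklearn.cluster import KMeans

df = pd.DataFrame({
    "sample_id":   [3, 1, 2, 2, 4, 5],
    "temperature": [21.5, 19.8, 20.1, 20.1, None, 23.0],
    "yield_pct":   [88.0, 91.2, 90.5, 90.5, 87.0, None],
})

df = df.sort_values("sample_id")               # sorting for organization
df = df.drop_duplicates()                      # duplicate removal
df = df.dropna()                               # clearing empty records
df = df[df["temperature"] > 20.0].copy()       # parameter-based filtration

# Clustering the remaining records to look for patterns.
km = KMeans(n_clusters=2, n_init=10, random_state=0)
df["cluster"] = km.fit_predict(df[["temperature", "yield_pct"]])
print(df)
```

Each step both shrinks and cleans the dataset before analysis, which is the workflow the system is meant to automate.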

SELECTION OF DATABASES TO STORE GEOSPATIAL-TEMPORAL DATA

The proliferation of geospatial-temporal data, driven by the widespread adoption of sensor platforms and the Internet of Things, has escalated the demand for effective data management solutions. In this context, GeoMesa, an open-source toolkit designed to enable comprehensive geospatial querying and analytics in distributed computing systems, plays a pivotal role. GeoMesa seamlessly integrates geospatial-temporal indexing capabilities with databases like Accumulo, HBase, Google Bigtable, and Cassandra, facilitating the storage and management of extensive geospatial datasets. This article addresses the critical need to benchmark and compare the performance of Accumulo and Cassandra when employed as underlying data stores for GeoMesa. By conducting performance tests, we aim to provide valuable insights into the relative strengths and weaknesses of these database systems, thereby aiding decision-makers in selecting the most suitable solution for their specific application requirements. The evaluation includes an in-depth analysis of performance metrics, such as throughput and latency, as well as consideration of system parameters, query density, and data access distribution. It was found that Accumulo outperforms Cassandra in almost all areas: read latency and resource usage under heavy load, and write latency under any load. In turn, Cassandra has lower read latency under low load and lower CPU usage under heavy load.
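The metrics at the center of this comparison, throughput and latency percentiles, can be derived from per-query timings as in the generic sketch below; this illustrates the metric definitions only, not the article's benchmark harness, and the latency distribution is synthetic:

```python
# Generic benchmark post-processing: throughput and latency percentiles
# from a list of per-query durations (synthetic numbers for illustration).
import random
import statistics

random.seed(0)
latencies_ms = [random.lognormvariate(2.0, 0.5) for _ in range(10_000)]

total_time_s = sum(latencies_ms) / 1000.0        # sequential-client assumption
throughput_qps = len(latencies_ms) / total_time_s

qs = statistics.quantiles(latencies_ms, n=100)   # 99 percentile cut points
print(f"throughput: {throughput_qps:.1f} queries/s")
print(f"p50: {qs[49]:.2f} ms   p99: {qs[98]:.2f} ms")
```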

APPLICATION OF CALCULATION METHODS FOR SOLVING APPLIED PROBLEMS OF HEAT AND MASS EXCHANGE IN COMPLEX SYSTEMS

The accuracy of calculating and optimizing the objective function and its parameters when solving applied engineering problems depends on the accuracy of the formulation of the main optimization problem and of the associated calculation and applied optimization problems, as well as on the accuracy of the computational methods used to implement them. Taking more features of applied optimization problems into account complicates the formulation of boundary value problems and the methods for implementing them. Thus, implementing such extended boundary value problems requires applying several computational methods combined into a computational structure. The main condition for constructing a physically grounded boundary value problem is to find and justify the conditions for the existence of a unique solution. To increase the efficiency of the methods for calculating and optimizing technical parameters, it is necessary to increase the number of features accounted for in the calculation and applied optimization mathematical models of heat and mass transfer in technical systems. Along with the construction of boundary value problems, it is important to define and justify the conditions for the existence of a unique solution.
The research article deals with some aspects of solving applied problems of heat and mass transfer in technical systems. Nonlocal boundary value problems for homogeneous and inhomogeneous pseudodifferential partial differential equations with integral boundary conditions are considered; methods for solving a nonlocal inhomogeneous boundary value problem are proposed; and the well-posedness conditions of this problem in the class of infinitely differentiable generalized functions of power growth are defined and proven. Conditions under which the problem for pseudodifferential equations with an integral boundary condition is well posed are established. The results of this article can be applied to controlling possible risks when solving applied problems in technical systems, biotechnology, and veterinary medicine.
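To make the problem class concrete, a model nonlocal problem with an integral boundary condition has the schematic form below; this is our generic illustration of the class, and the article's exact operators, measures, and function spaces may differ:

```latex
% A model nonlocal problem: an evolution equation with a pseudodifferential
% operator a(D) and an integral (nonlocal-in-time) condition replacing the
% usual initial condition.
\frac{\partial u}{\partial t} + a(D)\,u = f(t, x), \qquad (t, x) \in (0, T) \times \mathbb{R}^n,
\int_0^T u(t, x)\, d\mu(t) = \varphi(x).
% Well-posedness asks for existence, uniqueness, and continuous dependence
% of u on the data (f, \varphi) in the chosen class of generalized functions.
```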

CONTROL OF THE SHEAR STRAIN IN THE HOMOGENIZATION ZONE OF THE DISK EXTRUDER

A simple method for determining the quality of a polymer melt is very important in production: one that is simple and quick to use and therefore valuable for optimizing extrusion processes.
This work describes how the mixing effect can be improved by changing the velocity fields and thereby changing the shear strain values. The shear strain has been calculated for all four channels of the homogenization zone of a disk extruder. It was found that changing the disk rotation frequency makes it possible to vary the mean shear strain in the channels from 3324 to 4966 at constant throughput for the entire homogenization zone. The impact of the different channels on the mixing effect was evaluated: the first, second, third, and fourth channels contribute 21%, 57%, 15%, and 7%, respectively, to the mixing effectiveness of the overall homogenization zone. Additionally, it was found that the second channel experiences the highest shear strain values. In the third channel, although the shear rate values are relatively high, the short residence time of the melt element results in comparatively smaller shear strain values. The average shear strain values remain constant for a given disk speed. It is shown that mixing quality can be assessed from the amount of energy supplied, which simplifies both determining the quality of the melt and regulating it.
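The reasoning about the third channel follows directly from the standard definition of total shear strain as shear rate accumulated over residence time; this is a textbook relation, not a formula quoted from the article:

```latex
% Total shear strain in channel i: shear rate integrated over the residence
% time of a melt element. A high shear rate combined with a short residence
% time can still yield a small total strain, as observed for the third channel.
\gamma_i = \int_0^{t_{\mathrm{res},i}} \dot{\gamma}_i(t)\, dt \approx \dot{\gamma}_i\, t_{\mathrm{res},i},
\qquad \gamma_{\mathrm{total}} = \sum_{i=1}^{4} \gamma_i
```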
