OPTIMIZATION OF THE TRAJECTORY OF SENSORS MOTION TAKING INTO ACCOUNT THE IMPORTANCE OF THE AREAS OF THE MONITORING AREA SEGMENTS AND THE PROBABILITY OF DETECTION OF OBJECTS

Due to the widespread use of sensors for data collection and processing, the amount of accumulated information and energy efficiency are key criteria. While a territory is being monitored, the objects under study commonly move, which changes the probability of their detection within a given segment of the territory. Segments may also differ in importance. Taking these factors into account significantly increases the amount of information accumulated. The article presents a method for constructing the optimal trajectory of sensor motion that takes into account the importance of territory segments and the probability of detecting objects. The method is based on representing the distribution of the detection probability and the importance of territory segments as layers, and on integrating them into a layer of the probable value of detected objects. Seven classes of the probable value of detected objects, with corresponding numerical and graphical equivalents, are considered. The optimal trajectory of sensor motion is understood as the trajectory that requires minimum energy expenditure. Energy efficiency is achieved by constructing a trajectory of minimum length as a solution to the travelling salesman problem. The set of points through which the trajectory is built is formed from the layer of the probable value of detected objects after a node-replacement procedure. A separate node-replacement class, or a superposition of node-replacement classes, is proposed for each class of probable value of detected objects; replacements of five, three and two nodes are described. A genetic algorithm with modified crossover and selection rules is used to solve the problem. A set of trajectories is constructed with the proposed algorithm. Analysis of the obtained results confirmed the efficiency of the developed method and showed a 76 % increase in energy efficiency when covering a given area.
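
As an illustration of the optimization core described above, the following is a minimal Python sketch of a genetic algorithm for the travelling salesman problem over a set of trajectory points. It uses standard ordered crossover, swap mutation and truncation selection; the authors' modified crossover and selection rules, the node-replacement procedure and the layer construction are not reproduced here, and all names and parameters are illustrative assumptions.

```python
import random

def tour_length(tour, dist):
    """Total length of a closed tour over a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def ordered_crossover(p1, p2):
    """OX crossover: copy a slice from parent 1, fill the rest in parent 2's order."""
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = [c for c in p2 if c not in child]
    for i in range(n):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

def ga_tsp(dist, pop_size=100, generations=500, mutation_rate=0.1):
    """Shortest-tour search by a plain genetic algorithm (illustrative sketch)."""
    n = len(dist)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: tour_length(t, dist))
        elite = pop[: pop_size // 5]                 # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = random.sample(elite, 2)
            child = ordered_crossover(p1, p2)
            if random.random() < mutation_rate:      # swap mutation
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda t: tour_length(t, dist))
```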

THE NONLOCAL PROBLEM FOR FRACTAL DIFFUSION EQUATION

Over the past few decades, the theory of pseudodifferential operators (PDO) and of equations with such operators (PDE) has been intensively developed. Yaroslav Drin and Samuil Eidelman initiated a new direction in the theory of PDE, which they called parabolic PDE with non-smooth homogeneous symbols (PPDE). In the early 1970s they constructed an example of the Cauchy problem for a modified heat equation in which the Laplace operator is replaced by its square root, a PDO whose homogeneous symbol |σ| is not smooth at the origin. The fundamental solution of the Cauchy problem (FSCP) for such an equation is an exact power function, whereas for the heat equation the FSCP is an exact exponential function. The Laplace operator itself can be interpreted as a PDO with the smooth homogeneous symbol |σ|^2, σ ∈ Rn. PPDE containing PDO with homogeneous non-smooth symbols generalize the heat equation. They have important applications in the theory of random processes, in particular in the construction of discontinuous Markov processes whose generators are integro-differential operators related to PDO, and in the rapidly developing modern theory of fractals. If the PDO symbol does not depend on the spatial coordinates, then the Cauchy problem for a PPDE is correctly solvable in a space of distribution-type generalized functions; in this case the solution is written as a convolution of the FSCP with the initial generalized function. These results belong to a number of domestic and foreign mathematicians, in particular S. Eidelman and Y. Drin (who were the first to define PPDO with non-smooth symbols and to begin the study of the Cauchy problem for the corresponding PPDE), M. Fedoruk, A. Kochubey, V. Gorodetsky, V. Litovchenko and others. For certain new classes of PPDE, the correct solvability of the Cauchy problem in the space of Hölder functions has been proved, classical FSCP have been constructed, and exact estimates of their power-law derivatives have been obtained [1–4]. Of fundamental importance is the interpretation of PDO proposed by A. Kochubey in terms of hypersingular integrals (HSI); the HSI symbol is constructed from a known PDO symbol and vice versa [6]. The theory of HSI, which significantly extends the class of PDO, was developed by S. Samko [7]; we extend this concept to matrix HSI [5]. Non-local multipoint problems with respect to the time variable and problems with argument deviation generalize the Cauchy problem. Here we prove the solvability of a nonlocal problem using the method of steps. We consider an evolutionary nonlinear equation with a regularized fractal (fractional) derivative of order α ∈ (0, 1] with respect to the time variable and a general second-order elliptic operator with variable coefficients with respect to the spatial variable. Such equations describe fractal properties in real processes characterized by turbulence, in hydrology, ecology, geophysics, environmental pollution, economics and finance.
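
For the reader's convenience, the regularized fractional time derivative of order α ∈ (0, 1] mentioned above is commonly understood in the Caputo (regularized) sense; the following standard definition is an assumption about the exact form used in the paper:

```latex
D^{\alpha}_{t} u(t) \;=\; \frac{1}{\Gamma(1-\alpha)}
\int_{0}^{t} (t-s)^{-\alpha}\, \frac{\partial u(s)}{\partial s}\, ds,
\qquad 0 < \alpha < 1 .
```

As α → 1 this operator passes to the classical first derivative ∂u/∂t, so the ordinary diffusion equation is recovered as a limiting case.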

METHOD AND ALGORITHMS FOR CALCULATING HIGH-PRECISION ORIENTATION AND MUTUAL BINDING OF COORDINATE SYSTEMS OF SPACECRAFT STAR TRACKERS CLUSTER BASED ON INACCURATE MEASUREMENTS

The problem of increasing the accuracy of determining the orientation of a spacecraft (SC) using a system of star trackers (ST) is considered. Methods are proposed that use a joint field of view and refine the relative position of the STs to improve the accuracy of orientation determination. Using several star trackers increases the angle between the directions to the stars within the joint field of view, which reduces the condition number of the matrices used in calculating the orientation parameters. The paper develops a combinatorial method for interval estimation of the SC orientation with an arbitrary number of star trackers. To calculate the ST orientation, a linear problem of interval estimation of the orthogonal orientation matrix is solved for a sufficiently large number of stars. The orientation quaternion is determined under the condition that the corresponding orientation matrix belongs to the obtained interval estimates. The case is considered when the a priori estimate of the mutual binding of the star trackers may have an error comparable to or greater than the error of measuring the angular coordinates of the stars. With inaccurately specified matrices of the mutual orientation of the star trackers, the errors of the mutual orientations of the STs are added to the errors of measuring the directions to the stars, which widens the uncertainty intervals of the right-hand sides of the system of linear algebraic equations used to determine the orientation parameters. A method is proposed that refines the mutual binding of the internal coordinate systems of a pair of STs as an independent task, after which the main problem of increasing the accuracy of spacecraft orientation is solved. The developed method and algorithms are based on interval estimates of orthogonal orientation matrices; the orthogonality of orientation matrices is used to further narrow the intervals. Numerical simulation made it possible to evaluate the advantages and disadvantages of each of the proposed methods.
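
The step from an estimated orthogonal orientation matrix to the orientation quaternion mentioned above can be illustrated by the following standard conversion. This is a sketch only: the paper's interval-estimation machinery is not reproduced, and the function name is an assumption.

```python
import numpy as np

def quat_from_rotation_matrix(R):
    """Unit quaternion (w, x, y, z) from an orthogonal 3x3 orientation matrix."""
    q = np.empty(4)
    tr = np.trace(R)
    if tr > 0:
        s = 2.0 * np.sqrt(1.0 + tr)                # s = 4w
        q[0] = 0.25 * s
        q[1] = (R[2, 1] - R[1, 2]) / s
        q[2] = (R[0, 2] - R[2, 0]) / s
        q[3] = (R[1, 0] - R[0, 1]) / s
    else:
        i = int(np.argmax(np.diag(R)))             # largest diagonal entry
        j, k = (i + 1) % 3, (i + 2) % 3
        s = 2.0 * np.sqrt(1.0 + R[i, i] - R[j, j] - R[k, k])
        q[1 + i] = 0.25 * s
        q[0] = (R[k, j] - R[j, k]) / s
        q[1 + j] = (R[j, i] + R[i, j]) / s
        q[1 + k] = (R[k, i] + R[i, k]) / s
    return q / np.linalg.norm(q)                   # guard against round-off
```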

DEVELOPMENT OF METHOD AND SOFTWARE FOR COMPRESSION AND ENCRYPTION OF INFORMATION

The subject area of lossless and lossy information compression is surveyed, and data compression algorithms with minimal redundancy are considered: Shannon-Fano coding and Huffman coding, as well as dictionary-based compression: Lempel-Ziv coding. In the course of the work, the theoretical foundations of data compression were applied, various data compression methods were studied, and the best methods for archiving, encrypting and storing various kinds of data were identified. Data archiving is used here for the safe and rational placement of information on external media and for its protection from deliberate or accidental destruction or loss. In the Embarcadero RAD Studio XE8 integrated development environment, a software package for an archiver with code-based protection of information was developed. The archiver's mechanism of operation is based on the creation and processing of streaming data; its core is the function for compressing and decompressing files by the Lempel-Ziv method. Polyalphabetic substitution (the Vigenère cipher) was used as the method and means of protecting information in the archive. The results of the work, in particular the developed software, can be used in practice for archival storage of protected information; the data archiving and encryption mechanism can be used in information transmission systems to reduce network traffic and ensure data security. The resulting encryption and archiving software was used in a module of the software package «Diplomas SNU v.2.6.1», developed at the Volodymyr Dal East Ukrainian National University. This complex is designed to create a unified register of diplomas at the university and to automate the creation of higher-education diploma files in the multifunctional graphics editor Adobe Photoshop. The controller exports all data for the analysis and generation of diplomas from the parameters of the corresponding XML files, downloaded from the unified state education database as compressed zip archives. The developed module unzips and receives the XML files with parameters for the further work of the «Diplomas SNU v.2.6.1» complex.
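
The compress-then-encrypt pipeline described above can be sketched in a few lines. The sketch below uses Python's zlib (whose DEFLATE algorithm belongs to the Lempel-Ziv family) and a byte-wise polyalphabetic (Vigenère-style) substitution; it illustrates the scheme, not the Delphi/RAD Studio implementation described in the article, and all names are assumptions.

```python
import zlib

def vigenere(data: bytes, key: bytes, decrypt: bool = False) -> bytes:
    """Polyalphabetic (Vigenère-style) substitution over the byte alphabet 0..255."""
    sign = -1 if decrypt else 1
    return bytes((b + sign * key[i % len(key)]) % 256 for i, b in enumerate(data))

def pack(data: bytes, key: bytes) -> bytes:
    """Compress with a Lempel-Ziv-based codec, then encrypt."""
    return vigenere(zlib.compress(data, level=9), key)

def unpack(blob: bytes, key: bytes) -> bytes:
    """Decrypt, then decompress."""
    return zlib.decompress(vigenere(blob, key, decrypt=True))

payload = b"diploma records exported as XML" * 10
key = b"secret"
assert unpack(pack(payload, key), key) == payload
```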

VALIDATION OF LAND DEGRADATION MAPS ON THE BASIS OF GEOSPATIAL DATA

Today a large amount of satellite data, and products based on it, is freely available. By integrating these data with heterogeneous socio-economic information, soil maps and model biophysical data, using modern machine learning methods and modern approaches to geospatial data processing, it becomes possible to create maps of land degradation. Since classification maps, productivity maps and deforestation maps are the main intellectual components of a degradation map, it is these products that determine the overall reliability of the results. For their validation, the necessary quality metrics are defined in this work and the corresponding calculations are performed. To evaluate the land cover map, independent test data were used to build a confusion matrix, and the obtained areas of the main crops were compared with statistical data. Agricultural land productivity was estimated using time series of land cover classification maps and biophysical plant development parameters from the Crop Growth Modeling System (CGMS), derived from satellite data and plant development models. The accuracy of the LAI map (based on CGMS) was assessed by comparing Leaf Area Index (LAI) values modeled with the CGMS software framework against LAI ground measurements collected through field surveys. Numerous experiments were carried out to assess the quality of the models and of the deforestation maps on an independent test sample that was not used at the neural network training stage. Degradation maps for several years were also analyzed and validated with respect to productivity, in particular for a region of Ukraine that has undergone significant changes.
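
A confusion-matrix evaluation of a classification map, as used above for the land cover product, can be sketched as follows. This is illustrative numpy code; overall accuracy and Cohen's kappa are shown as typical quality metrics, an assumption since the article does not list the exact metrics it computes.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows: reference (test) labels; columns: map (predicted) labels."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def accuracy_metrics(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix."""
    n = cm.sum()
    po = np.trace(cm) / n                        # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n**2    # chance agreement
    return po, (po - pe) / (1 - pe)

cm = confusion_matrix([0, 0, 1, 2, 2, 2], [0, 1, 1, 2, 2, 0], n_classes=3)
overall_accuracy, kappa = accuracy_metrics(cm)
```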

NUMERICAL-ANALYTIC SOLUTION OF ONE MODELING PROBLEM OF FRACTIONAL-DIFFERENTIAL DYNAMICS OF COMPUTER VIRUSES

The paper considers the problem of modeling the dynamics of computer virus spreading using a model based on the mathematical theory of biological epidemics. The urgency of the problem arises from the need to build effective anti-virus protection systems for computer networks based on the results of mathematical modeling of the spread of malicious software. We consider the SIES model (Gan C., Yang X., Zhu Q.), which studies the spread dynamics of computer viruses while separating the influence of computers that are accessible and inaccessible on the Internet. In order to take non-local effects, in particular memory effects, into account, a modification of this model based on the theory of fractional-order integro-differentiation is proposed. A technique for obtaining a numerical-analytical solution of the problem of modeling computer virus spread dynamics on the basis of the fractional-differential counterpart of the SIES model is presented. Closed-form solutions are obtained for the numbers of vulnerable and external computers, and a finite-difference scheme of the fractional Adams method is constructed for the problem of determining the number of infected computers. The results of computational experiments based on the developed technique show a subdiffusive evolution of the system towards the steady state. At the same time, the number of external computers grows rapidly for a short time at the initial stages of the process, followed by a smooth and slow decrease towards the steady state. For medium and large values of the time variable, the number of infected computers evolves to the steady state in an ultra-slow mode. Thus, the proposed technique makes it possible to study the families of dynamic reactions in the process of computer virus spreading, including fast transient processes and the ultra-slow evolution of systems with memory.
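
A minimal sketch of the fractional Adams (predictor-corrector) scheme mentioned above, in the standard Diethelm-Ford-Freed form for a scalar Caputo equation D^α y = f(t, y), is given below. The actual SIES right-hand side and parameter values from the paper are not reproduced; the example equation is illustrative.

```python
import math

def fractional_adams(f, y0, alpha, h, n_steps):
    """Adams-Bashforth-Moulton predictor-corrector scheme for the Caputo
    fractional ODE D^alpha y(t) = f(t, y), y(0) = y0, 0 < alpha <= 1."""
    g1, g2 = math.gamma(alpha + 1), math.gamma(alpha + 2)
    ys, fs = [y0], [f(0.0, y0)]
    for n in range(1, n_steps + 1):
        tn = n * h
        # predictor: fractional rectangle rule over the full history
        pred = y0 + h**alpha / g1 * sum(
            ((n - j)**alpha - (n - 1 - j)**alpha) * fs[j] for j in range(n))
        # corrector: fractional trapezoidal weights
        a0 = (n - 1)**(alpha + 1) - (n - 1 - alpha) * n**alpha
        s = a0 * fs[0] + sum(
            ((n - j + 1)**(alpha + 1) - 2 * (n - j)**(alpha + 1)
             + (n - j - 1)**(alpha + 1)) * fs[j] for j in range(1, n))
        yn = y0 + h**alpha / g2 * (s + f(tn, pred))
        ys.append(yn)
        fs.append(f(tn, yn))
    return ys

# illustrative test: fractional relaxation D^alpha y = -y, y(0) = 1,
# whose solution is the (slowly decaying) Mittag-Leffler function
solution = fractional_adams(lambda t, y: -y, 1.0, alpha=0.8, h=0.01, n_steps=1000)
```

The quadratic cost in the number of steps reflects the memory effect: every new value depends on the entire history of f, which is exactly what produces the subdiffusive and ultra-slow regimes described in the abstract.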

QUANTITATIVE ASSESSMENT OF TECHNOLOGICAL SINGULARITY

The article deals with the topical issue of quantitative assessment of the technological singularity. The authors analyzed artificial intelligence tools and approaches affecting the development of superintelligence, which allowed them, for the first time, to develop a general multifactor model of the technological singularity and to present it in a space of direct and indirect development indicators. The developed approach makes it possible to move from expert judgments about the technological singularity, given as extrapolated complexity curves of various systems or qualitative descriptions of possible scenarios of technological development, to a quantitative assessment of the state of the technological singularity. The links between the relevant functional areas of human intelligence and modern expert systems are formalized, and a structural-functional model of knowledge acquisition is developed. A conclusion is drawn about the real limits of modern "intelligent" systems at the level of artificial thinking and logical cognition, which corresponds to weak artificial intelligence. The state and directions of hardware development were analyzed, leading to a conclusion about the combined use of different hardware architectures and information processing principles, namely supercomputers, neurosynaptic and quantum computers, to implement the concept of the technological singularity. The areas of research most influential in the development of artificial intelligence, and their relationship to existing approaches and methods of processing big data, are formalized in the form of a structural model. For the first time, a classification of artificial intelligence development indicators is proposed within two classes, direct and indirect, grouped into three groups: the intensity of research and public activity; the level of applied (technological) solutions; and practical implementation, the group that most affects the development of general artificial intelligence. A correlation between the formalized groups of indicators was revealed, which confirms the hypothesis of a cause-effect relationship between the groups: theoretical research → applied solutions → practical implementation, and of their mutual influence.
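
A crude sketch of how such a quantitative assessment could be computed from the three indicator groups is given below, assuming min-max normalized indicator time series and a lagged Pearson correlation as the probe of the cause-effect chain; both are assumptions, since the article does not specify its formulas.

```python
import numpy as np

def group_index(indicators, weights=None):
    """Composite index of one indicator group: weighted mean of min-max
    normalized time series (rows: indicators, columns: years)."""
    x = np.asarray(indicators, dtype=float)
    lo, hi = x.min(axis=1, keepdims=True), x.max(axis=1, keepdims=True)
    x = (x - lo) / (hi - lo)
    w = np.full(len(x), 1.0 / len(x)) if weights is None else np.asarray(weights)
    return w @ x

def lagged_corr(a, b, lag=0):
    """Pearson correlation of index a with index b shifted `lag` years forward,
    a crude check of the chain: research -> applied solutions -> implementation."""
    if lag:
        a, b = a[:-lag], b[lag:]
    return float(np.corrcoef(a, b)[0, 1])
```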

ALGORITHMIC AND HARDWARE TOOLS FOR MOVING TARGETS DETECTION ON THE PROJECTION SCREEN FROM THE LASER EMITTER OF THE MULTIMEDIA TRAINER

The paper investigates the problems of creating a multimedia simulator with a laser emitter for training the correct use of various weapons, in order to develop the skills of accurate high-speed shooting. The specialized software is designed to train novice shooters and to improve fire training in the techniques and rules of small arms shooting. The software component consists of separate modules that allow the user to configure the interactive shooting range and to create and run shooting exercises. A hit is registered when the central pixel of the laser response (spot) on the projection image coincides with one of the target pixels. The laser emitter is attached to a real weapon, and when the trigger is pulled a spot appears on the screen as the result of the shot. Along with the use of real weapons, computer graphics tools are used to create virtual environments and to simulate a variety of situations in them, as close as possible to real conditions. Hardware and algorithmic shooting errors have been studied and methods for their reduction have been developed: the hardware error depends on the parameters of the laser emitter, the size and characteristics of the projection screen, and the resolution and characteristics of the receiving camera-sensor, while the algorithmic error depends on the selected algorithm and the frame processing methods. The spots produced by the laser emitter on the projection image are analyzed and the centroid of the spot is located. An algorithm for determining the centroid of a spot through two-stage binarization of the image has been developed and tested, and the binarization thresholds optimal for the problem have been determined. The test results showed that the obtained accuracy is within the hardware component of the error, and that decision-making is performed in real time.
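
The two-stage binarization and centroid search described above might look as follows in outline. This is a numpy sketch under assumed threshold parameters; the authors' optimal thresholds and camera pipeline are not reproduced.

```python
import numpy as np

def spot_centroid(gray, coarse_thr=200, refine_ratio=0.5):
    """Locate the laser-spot centroid via two-stage binarization.
    Stage 1: a coarse global threshold isolates candidate bright pixels.
    Stage 2: a stricter threshold, set relative to the spot's peak intensity,
    refines the mask; the centroid is the intensity-weighted mean of its pixels."""
    mask = gray >= coarse_thr
    if not mask.any():
        return None                       # no spot in this frame
    peak = int(gray[mask].max())
    refined = gray >= coarse_thr + refine_ratio * (peak - coarse_thr)
    ys, xs = np.nonzero(refined)
    w = gray[ys, xs].astype(float)
    return float((xs * w).sum() / w.sum()), float((ys * w).sum() / w.sum())
```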

THE SITUATION OF UNCERTAINTY THAT ARISES IN THE PROBLEMS OF SEMANTICS AND WAYS TO SOLVE IT

Various types of uncertainty that arise when solving problems of semantics are considered. Decision theory investigates such situations involving incomplete, current and fuzzy input information, but uncertainty in problems of semantics has other manifestations as well, and is resolved in different ways depending on its type. Problems of this class are related to recognition: to establish the essence of certain objects, similarity measures are introduced, which are a subjective assessment. For different measures, the values of the objective functions may differ because of the ambiguity of the result obtained for these functions or of the chosen similarity measure, and may not satisfy the purpose of the study; a situation of uncertainty thus arises when choosing the result. With some similarity measures, however, a global solution can be found, and such problems are divided into subclasses of solvable problems. Since problems of semantics reduce to combinatorial optimization problems in which the argument of the objective function is a combinatorial configuration, the situation of uncertainty may also be related to the special structure of the set of combinatorial configurations. To resolve it, several objective functions must be introduced, or optimization must be conducted according to several criteria, which are reduced to a weighted criterion (a linear convolution). The optimal solution is found by self-tuning algorithms that take into account constant and variable criteria introduced in the process of solving the problem; that is, in the course of its run the algorithm generates additional current information (quality criteria) that affects the prediction of future results. The situation of uncertainty manifests itself both through the fuzzy rules developed for processing and evaluating information and through the ambiguity of choosing the optimal solution over several criteria in multicriteria optimization. To get out of this situation, self-tuning algorithms are developed that introduce formal parameters in the process of solving the problem, generating auxiliary current information that cannot be specified in the input data. Subclasses of solvable problems are also used to resolve the situation of uncertainty, and the reference library is structured so as to reduce unsolvable problems to solvable ones.
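
The weighted criterion (linear convolution) mentioned above reduces several quality criteria to a single scalar objective. A minimal sketch, assuming criteria normalized to [0, 1] and maximization; all names are illustrative:

```python
def weighted_criterion(values, weights):
    """Linear convolution of several normalized criterion values into one scalar."""
    return sum(w * v for w, v in zip(weights, values))

def best_solution(candidates, criteria, weights):
    """Pick the candidate maximizing the weighted criterion; `criteria` is a list
    of functions, one per quality measure, each mapping a candidate to [0, 1]."""
    return max(candidates,
               key=lambda c: weighted_criterion([f(c) for f in criteria], weights))
```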

ON THE STABILITY OF DYNAMIC SYSTEMS WITH CERTAIN SWITCHINGS, WHICH CONSIST OF LINEAR SUBSYSTEMS WITHOUT DELAY

This work is devoted to the further development of the study of the stability of dynamic systems with switchings. There are many different classes of dynamical systems described by switched equations; the authors divide systems with switchings into two classes, namely systems with certain (definite) and with indefinite switchings. This paper considers a system with certain switchings, namely a system composed of differential and difference subsystems, subject to the condition that the Lyapunov function decreases. One of the most versatile methods for studying the stability of the zero equilibrium state is the second Lyapunov method, or the method of Lyapunov functions. When using it, a positive definite function is selected that satisfies certain properties on the solutions of the system. If a system of differential equations is considered, the condition of non-positiveness (negative definiteness) of the total derivative along the system is imposed; if a difference system of equations is considered, the first difference along the system is examined. For more general dynamical systems (in particular, for systems with switchings), the condition that the Lyapunov function does not increase (decreases) along the solutions of the system is imposed. Since the paper considers a system consisting of differential and difference subsystems, the condition of non-increase (decrease) of the Lyapunov function is used, and for a specific type of subsystems (linear) the conditions for non-increase (decrease) are specified. The basic idea of applying the second Lyapunov method to systems of this type is to construct a sequence of Lyapunov functions in which the level surfaces of the next Lyapunov function at the switching points are either «stitched» or «contain the level surface of the previous function».
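
For the linear subsystems mentioned above, candidate quadratic Lyapunov functions V(x) = xᵀPx can be obtained from the continuous and discrete Lyapunov equations. The following sketch uses SciPy with illustrative subsystem matrices; the paper's actual systems and the stitching of level surfaces at switching points are not reproduced.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_discrete_lyapunov

A_c = np.array([[0.0, 1.0], [-2.0, -3.0]])   # Hurwitz: differential subsystem
A_d = np.array([[0.5, 0.1], [0.0, 0.4]])     # Schur: difference subsystem
Q = np.eye(2)

# V(x) = x^T P x with total derivative dV/dt = -x^T Q x along dx/dt = A_c x:
# solve A_c^T P + P A_c = -Q
P_c = solve_continuous_lyapunov(A_c.T, -Q)

# V(x) = x^T P x with first difference V(A_d x) - V(x) = -x^T Q x:
# solve A_d^T P A_d - P = -Q
P_d = solve_discrete_lyapunov(A_d.T, Q)

# both solutions must be positive definite for the zero state to be stable
assert np.all(np.linalg.eigvalsh(P_c) > 0) and np.all(np.linalg.eigvalsh(P_d) > 0)
```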
