Information Technology of Jamming Cancellation
Introduction. Jamming causes high losses because it degrades the effectiveness of radar systems, anti-aircraft missile systems, and communication systems. Strategies for forming and deploying jamming are improving, and jamming power keeps growing. It is therefore important to improve jamming cancellation systems. Improving jamming cancellation methods based on matrix computations is a timely task, given the breakthrough development of computational methods that can be realized in digital circuitry. These include modern machine learning algorithms aimed at signal processing tasks. Jamming cancellation systems must operate stably under conditions of uncertainty; they must also run in real time and admit a simple hardware implementation. The purpose of the paper is to increase the efficiency of jamming cancellation in an antenna system under conditions of uncertainty, based on new randomized computation methods and their realization in a matrix-processor architecture. Results. An approach based on singular value decomposition and random projection is proposed. It provides effective jamming cancellation in antenna systems under conditions of uncertainty, that is, when the sample is short, the measuring system has its own noise, the input-output transformation matrix has an ill-defined numerical rank, and there is no prior information about the useful signal. Conclusions. Increasing the efficiency of jamming cancellation involves increasing stability and the jamming cancellation coefficient while reducing computational complexity. The jamming cancellation coefficient is increased by using stable signal recovery methods for discrete ill-posed inverse problems based on random projection and singular value decomposition. Computational complexity is reduced by realizing random projection and singular value decomposition on a processor array that performs parallel computations. Keywords: information technology, jamming, machine learning, algorithms, singular value decomposition, random projection, conditions of uncertainty, signal recovery.
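As a hedged illustration of the core idea in this abstract, the sketch below recovers a signal from a discrete ill-posed model y = A x + noise by truncated SVD; the smoothing kernel A, the test signal, and the truncation rank k are invented for the example and are not the paper's actual antenna model.

```python
# Minimal sketch (not the authors' implementation): signal recovery for a
# discrete ill-posed problem y = A x + noise by truncated SVD. The matrix A,
# the true signal x_true, and the truncation rank k are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

n = 64
# Ill-conditioned input-output matrix: smooth kernel with rapidly decaying spectrum
t = np.linspace(0, 1, n)
A = np.exp(-30.0 * (t[:, None] - t[None, :]) ** 2)

x_true = np.sin(2 * np.pi * t)                   # useful signal (unknown in practice)
y = A @ x_true + 1e-3 * rng.standard_normal(n)   # observation with sensor noise

# Truncated SVD: keep only the k dominant singular triplets to stabilize inversion
U, s, Vt = np.linalg.svd(A)
k = 12                                           # truncation level (regularization parameter)
x_hat = Vt[:k].T @ (np.diag(1.0 / s[:k]) @ (U[:, :k].T @ y))

print("relative recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Choosing k trades bias against noise amplification, which is why the abstract emphasizes stability under short samples and measurement noise.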
- Book Chapter
4
- 10.1007/978-3-319-70581-1_31
- Nov 22, 2017
A brief overview of our work on the solution of discrete ill-posed problems (DIPs) is presented. We obtained stable solutions of DIPs by truncated singular value decomposition and by random projection methods. For the random-projection-based method, analytic and experimental averaging over random matrices is carried out to evaluate the error of true-signal recovery. Averaging over random matrices diagonalizes the matrix that conditions both components of the error (deterministic and stochastic). The values of the diagonal elements change monotonically as a function of k, which in turn leads to smoother error characteristics and fewer local minima. The experimental study revealed a connection between the elements of the diagonalized matrix and the singular values of the original matrix. This provides a basis for investigating the connection between the truncated singular value decomposition method and the random projection method.
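A minimal sketch of the random-projection route to a stable DIP solution, under the assumption that the method amounts to projecting the system with a k x n Gaussian matrix R and solving the reduced least-squares problem via a pseudoinverse; the test operator, noise level, and sweep over k are illustrative.

```python
# Hedged sketch of a random-projection solver for y = A x + noise: project the
# system onto k random directions and solve the reduced least-squares problem.
import numpy as np

rng = np.random.default_rng(1)

n = 64
t = np.linspace(0, 1, n)
A = np.exp(-30.0 * (t[:, None] - t[None, :]) ** 2)   # ill-conditioned operator
x_true = np.sin(2 * np.pi * t)
y = A @ x_true + 1e-3 * rng.standard_normal(n)

def rp_solve(A, y, k, rng):
    """Solve min ||R A x - R y|| for a k x n Gaussian random matrix R."""
    R = rng.standard_normal((k, A.shape[0])) / np.sqrt(k)
    x_hat, *_ = np.linalg.lstsq(R @ A, R @ y, rcond=None)
    return x_hat

# The recovery error as a function of k typically has a minimum: too small k
# loses signal content, too large k re-admits amplified noise.
for k in (4, 8, 12, 24, 48):
    err = np.linalg.norm(rp_solve(A, y, k, rng) - x_true) / np.linalg.norm(x_true)
    print(f"k={k:2d}  relative error {err:.3f}")
```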
- Research Article
17
- 10.1108/17563781111160020
- Aug 23, 2011
- International Journal of Intelligent Computing and Cybernetics
Purpose: The purpose of this paper is to introduce a new hybrid method for reducing the dimensionality of high-dimensional data. Design/methodology/approach: The literature on dimensionality reduction (DR) includes research efforts that combine random projections (RP) and singular value decomposition (SVD) so as to derive the benefits of both methods. However, SVD is well known for its computational complexity. Clustering under the notion of concept decomposition has been shown to be less computationally complex than SVD and useful for DR. The method proposed in this paper combines RP and fuzzy k-means clustering (FKM) for reducing the dimensionality of the data. Findings: The proposed RP-FKM is computationally less complex than SVD and RP-SVD. On image data, the proposed RP-FKM produces less distortion than RP. The proposed RP-FKM provides better text retrieval results than conventional RP and performs similarly to RP-SVD. For the text retrieval task, the superiority of SVD over the other DR methods noted here is in good agreement with the analysis reported by Moravec. Originality/value: The hybrid method proposed in this paper, combining RP and FKM, is new. Experimental results indicate that the proposed method is useful for reducing the dimensionality of high-dimensional data such as images and text.
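A toy sketch of the RP-FKM pipeline in the spirit of this abstract, assuming a Gaussian projection followed by a standard fuzzy c-means membership/center iteration; the data sizes, fuzzifier m, and iteration count are illustrative assumptions, not the paper's settings.

```python
# Illustrative sketch of an RP + fuzzy k-means (FKM) pipeline: project first,
# then run a textbook fuzzy c-means loop on the low-dimensional data.
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((500, 1000))          # 500 points in 1000-D

# Step 1: Gaussian random projection down to d dimensions
d = 50
R = rng.standard_normal((1000, d)) / np.sqrt(d)
Xp = X @ R

# Step 2: fuzzy k-means on the projected data
c, m, iters = 5, 2.0, 50                      # clusters, fuzzifier, iterations
U = rng.random((500, c)); U /= U.sum(axis=1, keepdims=True)
for _ in range(iters):
    W = U ** m
    centers = (W.T @ Xp) / W.sum(axis=0)[:, None]           # weighted centroids
    dist = np.linalg.norm(Xp[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    U = 1.0 / (dist ** (2 / (m - 1)))                        # membership update
    U /= U.sum(axis=1, keepdims=True)

print("cluster sizes:", np.bincount(U.argmax(axis=1), minlength=c))
```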
- Supplementary Content
- 10.1088/0266-5611/14/5/020
- Oct 1, 1998
- Inverse Problems
The Newsletter is a key element in further enhancing the value of the journal to the inverse problems community. So why not be a part of this exciting forum by sending to our Bristol office material suitable for inclusion under any of the categories mentioned above. Your contributions will be very welcome. Book review: Rank-Deficient and Discrete Ill-Posed Problems, Numerical Aspects of Linear Inversion, P C Hansen, 1998, Philadelphia: SIAM, 247 pp, ISBN 0-89871-403-6, $44.00. For more than ten years, Per Christian Hansen has been one of the leading experts in the field of numerical linear algebra for discrete ill-posed problems. In this monograph he presents a detailed and comprehensive description of the state-of-the-art techniques for solving linear equations for this class of ill-conditioned problems. In the first chapter, the author carefully distinguishes between rank-deficient and (discrete) ill-posed problems Ax = b in terms of the singular values of the matrix A. Rank-deficient problems are characterized by matrices having a cluster of small singular values which are well separated from the other singular values. Ill-posed problems do not have this property, i.e. the singular values decay gradually to zero with no particular gap between them. They arise naturally from the discretization of integral equations of the first kind, while rank-deficient matrices can be considered as perturbations of singular but well-conditioned matrices B in the sense that ||A - B|| ||B^+|| is not too large. (Here B^+ denotes the Moore-Penrose inverse of B.) In section 1.2 he summarizes basic facts on ill-posed problems described by integral equations of the first kind. Emphasis is placed on the singular value decomposition as the basic tool for their analysis. The principle of regularization is introduced for both continuous and discrete problems. At the end of this chapter the author formulates four test examples: the first, from signal processing, leads to a rank-deficient problem; the second is the computation of the second derivative and is the discretization of a modestly ill-posed problem. The last two examples are discretizations of integral equations with analytic kernels and lead to highly ill-posed problems. Chapter 2 presents the essential tools for the analysis of discrete ill-posed problems which are fundamental throughout the book. A central role is played, as indicated above, by the singular value decomposition and its various generalizations, including a brief survey of their numerical computation. In chapter 3, rank-deficient problems are treated. For this class of problems the notion of the numerical ε-rank of a matrix makes sense and leads to a natural regularization concept. Perturbation bounds and the efficient computation by truncated SVD or QR decompositions are explained and illustrated by a number of numerical examples. In contrast, the regularization of ill-posed problems derived from the discretization of integral equations of the first kind is more difficult. In chapter 4, the author introduces filter factors, the resolution matrix, and the L-curve approach as useful regularization tools. The author distinguishes between direct and iterative regularization methods. In chapter 5 he studies the first class of methods, among which Tikhonov regularization is the most popular. For each of these methods he formulates perturbation bounds and discusses the numerical aspects.
In the reviewer's opinion, a highlight of this chapter is the illustrative examples at the end where the different regularization schemes are compared with each other. The iterative regularization methods are the subject of chapter 6. Emphasis is placed on conjugate gradient methods. The implementation of the standard CG algorithm by means of the Lanczos bidiagonalization algorithm is explained and various modifications are discussed. The author reviews results on regularization and convergence properties and devotes one section to the Lanczos bidiagonalization algorithm in finite precision. Finally, chapter 7 surveys methods for choosing the regularization parameter, a problem of obvious importance. Classical discrepancy principles are discussed as well as more recent modifications, methods based on error estimation, generalized cross-validation, and the L-curve criterion. An impressive bibliography of 378 references completes this monograph. As the author points out in the preface, this book is not intended to be an introduction into either the field of inverse problems or numerical linear algebra. It is not a textbook. The monograph contains almost no proofs but always references to the literature. It is the purpose of the book to give a survey of state-of-the-art numerical methods for solving rank-deficient or discrete ill-posed problems. In the referee's opinion there is no other book around which serves this goal even nearly as well as this one. This work truly fills a gap in the literature on inverse problems and ill-posed problems and is strongly recommended for every applied mathematician or engineer who has to solve rank-deficient or ill-posed problems numerically. A Kirsch University of Karlsruhe
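As a hedged illustration of the direct regularization methods the book surveys, the sketch below applies Tikhonov filter factors to the SVD of an ill-conditioned test matrix and prints the residual and solution norms that underlie the L-curve; the test problem is invented and is not one of the book's four examples.

```python
# Sketch of Tikhonov regularization via SVD filter factors, in the spirit of
# the direct methods surveyed in Hansen's book.
import numpy as np

rng = np.random.default_rng(3)
n = 64
t = np.linspace(0, 1, n)
A = np.exp(-30.0 * (t[:, None] - t[None, :]) ** 2)   # ill-conditioned test matrix
x_true = np.sin(2 * np.pi * t)
b = A @ x_true + 1e-3 * rng.standard_normal(n)

U, s, Vt = np.linalg.svd(A)
beta = U.T @ b                                  # SVD coefficients of the data

for lam in (1e-1, 1e-3, 1e-6):
    f = s**2 / (s**2 + lam**2)                  # Tikhonov filter factors
    x_lam = Vt.T @ (f * beta / s)               # filtered SVD solution
    res = np.linalg.norm(A @ x_lam - b)         # residual norm (L-curve x-axis)
    sol = np.linalg.norm(x_lam)                 # solution norm (L-curve y-axis)
    print(f"lambda={lam:.0e}  residual {res:.2e}  solution norm {sol:.2e}")
```

Plotting the (residual norm, solution norm) pairs over many lambda values traces the L-curve whose corner the book's chapter 7 uses to select the regularization parameter.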
- Conference Article
- 10.1109/ismw.2007.19
- Dec 10, 2007
Random projection (RP) is a common technique for dimensionality reduction under the L2 norm, for which many significant space-embedding results have been demonstrated. In particular, random projection techniques can yield sharp results for R^d under the L2 norm in time linear only in the number of data points and the dimensionality in question. Inspired by the use of symmetric probability distributions in previous work, we propose an RP algorithm based on hyperspherical symmetry and give its probabilistic analyses based on the beta and Gaussian distributions. We then present evaluations of our algorithm against other RP algorithms as well as the singular value decomposition (SVD). In particular, we benchmark our results via cosine similarity and the L2 norm on an image collection.
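A short sketch of one standard hyperspherically symmetric construction (projection rows normalized onto the unit sphere), offered as an assumption about the class of algorithm meant here rather than the authors' exact method; the check prints cosine-similarity and L2-norm preservation as in the benchmark.

```python
# Hedged sketch: rows drawn uniformly on the unit sphere by normalizing
# Gaussian vectors, then rescaled so L2 norms are preserved in expectation.
import numpy as np

rng = np.random.default_rng(4)
d, k = 1000, 100

R = rng.standard_normal((k, d))
R /= np.linalg.norm(R, axis=1, keepdims=True)   # each row uniform on S^{d-1}
R *= np.sqrt(d / k)                             # rescale to preserve L2 norms

x, y = rng.standard_normal(d), rng.standard_normal(d)
cos = lambda u, v: u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

print("cosine before:", cos(x, y), " after:", cos(R @ x, R @ y))
print("L2 norm ratio:", np.linalg.norm(R @ x) / np.linalg.norm(x))
```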
- Research Article
- 10.31673/2412-9070.2020.066465
- Jan 1, 2020
- Connectivity
The article analyzes the singular value decomposition (SVD) method as an effective way to build a recommender system. With the development of information technologies and their introduction into public life, there is a need to search for relevant information under conditions of uncertainty. Intelligent recommendation systems have recently been created to solve such problems [1]. The popularity of recommendation systems is growing in every segment of goods and services, music in particular. From a socio-economic point of view, such systems are the main tool for disseminating new musical compositions: they promote compositions in accordance with the preferences of the target audience and encourage users to purchase new music tracks. In addition, such systems significantly reduce search time and make it easier to find suitable musical compositions under conditions of uncertainty. The main problem in developing machine learning algorithms is the lack of an individual approach to each user. All recommendations are based on the statistical behavior of the majority, so a share of users do not receive recommendations that match their personal preferences. If each user were analyzed separately and recommendations were formed according to their personal use of Internet resources, the number of high-quality, more accurate proposals in the recommendation list would increase significantly. Machine learning methods are effectively used to build recommendation systems, namely the k-nearest neighbors method, the Bayesian algorithm, and the singular value decomposition method. Among these methods, SVD is the most widely used in practice. The method is used to reduce the number of non-significant factors in a data set. Factors in recommendation systems are properties that describe the user or the item; in music recommendation systems this can be a genre. SVD reduces the dimension of the matrix by discarding its non-significant hidden factors.
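A toy numeric sketch of the SVD factor-reduction idea described above; the rating matrix, rank, and recommendation rule are invented for illustration and are not the article's code.

```python
# Toy sketch: truncated SVD extracts k hidden factors from a user-item matrix
# and the low-rank reconstruction scores unseen items for recommendation.
import numpy as np

# rows = users, columns = music tracks; 0 marks an unobserved rating
Ratings = np.array([
    [5, 4, 0, 1, 0],
    [4, 5, 1, 0, 1],
    [1, 0, 5, 4, 5],
    [0, 1, 4, 5, 4],
], dtype=float)

U, s, Vt = np.linalg.svd(Ratings, full_matrices=False)
k = 2                                        # keep two hidden factors (e.g. genres)
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]   # low-rank score matrix

user = 0
unseen = np.where(Ratings[user] == 0)[0]
best = unseen[np.argmax(R_hat[user, unseen])]
print("recommend track", best, "to user", user, "score", round(R_hat[user, best], 2))
```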
- Research Article
- 10.15587/1729-4061.2021.239195
- Aug 31, 2021
- Eastern-European Journal of Enterprise Technologies
A conceptual model for analyzing the dynamics of project value achieved as a result of engineering under conditions of uncertainty has been developed. In methodological terms, the proposed approach is based on an array of isovalue lines, each of which corresponds to its own level of optimism in forecasting the project's cash flow. As the efficiency of the project increases due to engineering, the entire array of isovalue lines changes its geometric position, moving further from the origin (in the four-dimensional "time-benefit-cost-risk" space). The proposed model includes three stages. At the first stage, input information is collected and the corresponding analysis is initiated. The result of the second stage is a multivariate cash flow forecast and the calculation of the benefit-cost ratio (BCR) and its change for each scenario. The third stage provides the calculation of the expected BCR and its change, an assessment of the risk of making an erroneous decision, and the change in this risk as a result of the engineering session. The model makes it possible to calculate the achieved proportion of the static and dynamic vectors of change in project value, which is one of the key manifestations of the scientific novelty of the work. In the example considered, the share of the dynamic vector of growth in project value was found to be 35.47 %. The model has an environmental property: the success of value engineering under conditions of uncertainty is assessed on the basis of the annual total benefits and annual total costs throughout the project cycle. Thus, the analysis takes into account the impact of the project on the environment, which is reflected in the risk assessment. The case considered demonstrates the feasibility of applying the model in the practice of value engineering for construction projects.
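A minimal arithmetic sketch of the model's second and third stages (per-scenario BCR, expected BCR, and the risk of an erroneous decision); the cash-flow figures and scenario weights below are invented purely to show the calculation.

```python
# Hedged sketch: expected benefit-cost ratio over optimism scenarios.
scenarios = [  # (probability, total benefits, total costs) over the project cycle
    (0.25, 95.0, 100.0),    # pessimistic
    (0.50, 150.0, 100.0),   # base
    (0.25, 190.0, 105.0),   # optimistic
]

bcr = [b / c for _, b, c in scenarios]                          # per-scenario BCR
expected_bcr = sum(p * r for (p, _, _), r in zip(scenarios, bcr))
risk = sum(p for (p, _, _), r in zip(scenarios, bcr) if r < 1.0)  # P(BCR < 1)

print("per-scenario BCR:", [round(r, 2) for r in bcr])
print("expected BCR:", round(expected_bcr, 3), " risk of BCR<1:", risk)
```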
- Research Article
4
- 10.1142/s1793351x12500055
- Sep 1, 2012
- International Journal of Semantic Computing
The aim of this paper is to provide a comparison of various algorithms and parameters for building reduced semantic spaces. The effect of dimension reduction, the stability of the representation, and the effect of word order are examined for five algorithms that produce semantic vectors: random projection (RP), singular value decomposition (SVD), non-negative matrix factorization (NMF), permutations, and holographic reduced representations (HRR). The quality of the semantic representation was tested by means of a synonym-finding task using the TOEFL test on the TASA corpus. Dimension reduction was found to improve the quality of the semantic representation, but it is hard to find optimal parameter settings. Even though dimension reduction by RP was found to be more generally applicable than SVD, the semantic vectors produced by RP are somewhat unstable. Encoding word order into the semantic vector representation via HRR did not lead to any increase in scores over vectors constructed from word co-occurrence in context information. In this regard, very small context windows resulted in better semantic vectors for the TOEFL test.
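A tiny sketch contrasting SVD- and RP-based semantic word vectors built from word-by-document counts, as a hedged stand-in for the pipelines compared above; the corpus and dimensions are toy assumptions.

```python
# Hedged sketch: two ways to reduce a word-by-document count matrix to
# low-dimensional semantic vectors, then compare a word-pair similarity.
import numpy as np

rng = np.random.default_rng(5)
docs = ["cat sat mat", "dog sat mat", "cat dog pet", "pet sat home"]
vocab = sorted({w for d in docs for w in d.split()})
idx = {w: i for i, w in enumerate(vocab)}

# word-by-document count matrix as the raw semantic space
C = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        C[idx[w], j] += 1

k = 2
U, s, Vt = np.linalg.svd(C, full_matrices=False)
svd_vecs = U[:, :k] * s[:k]                                       # SVD word vectors
rp_vecs = C @ (rng.standard_normal((len(docs), k)) / np.sqrt(k))  # RP word vectors

cos = lambda u, v: u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
print("SVD sim(cat,dog):", round(cos(svd_vecs[idx['cat']], svd_vecs[idx['dog']]), 2))
print("RP  sim(cat,dog):", round(cos(rp_vecs[idx['cat']], rp_vecs[idx['dog']]), 2))
```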
- Research Article
34
- 10.1137/19m1237016
- Jan 1, 2020
- SIAM Journal on Matrix Analysis and Applications
This paper is devoted to the computation of low multilinear rank approximations of tensors. Combining the strategies of the power scheme, random projection, and singular value decomposition, we derive a ...
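The abstract is truncated, but the three ingredients it names (power scheme, random projection, SVD) are the standard randomized-SVD recipe; the sketch below shows the matrix case under that assumption, not the paper's tensor algorithm.

```python
# Hedged sketch of randomized SVD with a power scheme: random projection
# captures an approximate range, power iterations sharpen the spectrum, and a
# small SVD on the projected matrix yields the factors.
import numpy as np

def randomized_svd(A, k, n_power=2, oversample=10, rng=None):
    rng = rng or np.random.default_rng()
    m, n = A.shape
    Q = A @ rng.standard_normal((n, k + oversample))   # random projection
    for _ in range(n_power):                           # power scheme
        Q, _ = np.linalg.qr(Q)
        Q, _ = np.linalg.qr(A @ (A.T @ Q))
    Q, _ = np.linalg.qr(Q)
    Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

rng = np.random.default_rng(6)
A = rng.standard_normal((300, 40)) @ rng.standard_normal((40, 200))  # rank-40 matrix
U, s, Vt = randomized_svd(A, k=40, rng=rng)
print("approximation error:", np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A))
```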
- Research Article
4
- 10.1038/s41598-023-33030-4
- Apr 13, 2023
- Scientific Reports
In order to reduce the risk of data privacy disclosure and improve the effect of information privacy protection, a differential privacy protection algorithm for network sensitive information based on singular value decomposition is proposed. The TF-IDF method is used to extract network sensitive information text. By comparing the word frequencies of network sensitive information, high-frequency word elements in the network information content are collected to obtain the mining results of network sensitive information text. Based on decision tree theory, the equal-difference privacy budget allocation mechanism is improved to achieve equal-difference privacy budget allocation. By discarding some small singular values and the corresponding spectral vectors, the data can be perturbed while the availability of the original data is retained, so that it still faithfully represents the structure of the original data set. According to the results of equal-difference privacy budget allocation and singular value decomposition perturbation, the high-dimensional network graph data are reduced by random projection, singular value decomposition is performed on the reduced data, and Gaussian noise is added to the singular values. Finally, the matrix to be published is generated through the inverse operation of singular value decomposition to achieve differential privacy protection of network sensitive information. The experimental results show that the privacy protection quality of this algorithm is high and that data availability is effectively improved.
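A hedged sketch of the perturbation pipeline described above (random projection, SVD, Gaussian noise on the singular values, inverse SVD); sizes and the noise scale are assumptions, and a genuine differential-privacy guarantee would require calibrating the noise to the privacy budget, which is omitted here.

```python
# Sketch only: reduce with a random projection, decompose, perturb the
# singular values with Gaussian noise, and rebuild the matrix to publish.
# A real DP guarantee needs sigma calibrated to epsilon; not done here.
import numpy as np

rng = np.random.default_rng(7)
X = rng.random((200, 500))                      # toy high-dimensional network data

d = 60
R = rng.standard_normal((500, d)) / np.sqrt(d)  # random projection to d dims
Xp = X @ R

U, s, Vt = np.linalg.svd(Xp, full_matrices=False)
k = 20                                          # discard small singular values
s_noisy = s[:k] + rng.normal(0.0, 0.1, size=k)  # Gaussian noise on singular values
X_pub = U[:, :k] @ np.diag(np.clip(s_noisy, 0, None)) @ Vt[:k]

print("distortion:", np.linalg.norm(X_pub - Xp) / np.linalg.norm(Xp))
```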
- Research Article
38
- 10.1109/lgrs.2016.2581172
- Sep 1, 2016
- IEEE Geoscience and Remote Sensing Letters
While data-dependent dimensionality reduction has dominated in many applications of hyperspectral imagery, there is increasing interest in data-independent strategies—such as random projections—due to their promise for reduced computational complexity as well as their demonstrated ability to preserve application-important information. Such random-projection-based dimensionality reduction is investigated in the specific context of supervised hyperspectral classification. Both Hadamard- and Gaussian-based random projections are considered, applied alone as well as incorporated into a fast approximate singular value decomposition (SVD). Experimental results reveal that the proposed Hadamard-based random projection with the fast SVD (FSVD) offers a computationally attractive alternative to not only traditional SVD but also Gaussian-based FSVD for dimensionality reduction in hyperspectral classification.
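A sketch of one common Hadamard-based projection (a subsampled randomized Hadamard transform), offered as an assumption about the family of method evaluated here; the toy data and dimensions are illustrative, and the band count is padded to a power of two as the transform requires.

```python
# Hedged sketch: subsampled randomized Hadamard transform of hyperspectral
# pixels (random signs, orthonormal Hadamard mix, coordinate subsampling).
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(8)
n_pixels, n_bands = 1000, 128                  # bands padded to a power of two
X = rng.random((n_pixels, n_bands))            # toy hyperspectral cube (flattened)

D = rng.choice([-1.0, 1.0], size=n_bands)      # random sign flip per band
H = hadamard(n_bands) / np.sqrt(n_bands)       # orthonormal Hadamard transform
k = 32
S = rng.choice(n_bands, size=k, replace=False) # subsample k mixed coordinates

X_proj = ((X * D) @ H)[:, S] * np.sqrt(n_bands / k)
print("projected shape:", X_proj.shape)
print("norm ratio (first pixel):", np.linalg.norm(X_proj[0]) / np.linalg.norm(X[0]))
```

The Hadamard mix can be applied with a fast transform in O(n log n) per pixel, which is the computational attraction the abstract points to.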
- Research Article
- 10.1101/2025.02.04.636499
- Feb 8, 2025
- bioRxiv : the preprint server for biology
Principal Component Analysis (PCA) has long been a cornerstone in dimensionality reduction for high-dimensional data, including single-cell RNA sequencing (scRNA-seq). However, PCA's performance typically degrades with increasing data size, can be sensitive to outliers, and assumes linearity. Recently, Random Projection (RP) methods have emerged as promising alternatives, addressing some of these limitations. This study systematically and comprehensively evaluates PCA and RP approaches, including Singular Value Decomposition (SVD) and randomized SVD, alongside Sparse and Gaussian Random Projection algorithms, with a focus on computational efficiency and downstream analysis effectiveness. We benchmark performance using multiple scRNA-seq datasets including labeled and unlabeled publicly available datasets. We apply Hierarchical Clustering and Spherical K-Means clustering algorithms to assess downstream clustering quality. For labeled datasets, clustering accuracy is measured using the Hungarian algorithm and Mutual Information. For unlabeled datasets, the Dunn Index and Gap Statistic capture cluster separation. Across both dataset types, the Within-Cluster Sum of Squares (WCSS) metric is used to assess variability. Additionally, locality preservation is examined, with RP outperforming PCA in several of the evaluated metrics. Our results demonstrate that RP not only surpasses PCA in computational speed but also rivals and, in some cases, exceeds PCA in preserving data variability and clustering quality. By providing a thorough benchmarking of PCA and RP methods, this work offers valuable insights into selecting optimal dimensionality reduction techniques, balancing computational performance, scalability, and the quality of downstream analyses.
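A minimal sketch comparing PCA with Gaussian and sparse random projections on synthetic counts standing in for an scRNA-seq matrix, using scikit-learn's stock implementations; the dataset size and component counts are illustrative, not the study's benchmarks.

```python
# Hedged sketch: time PCA vs. Gaussian/sparse random projection on toy counts.
import time
import numpy as np
from sklearn.decomposition import PCA
from sklearn.random_projection import GaussianRandomProjection, SparseRandomProjection

rng = np.random.default_rng(9)
X = rng.poisson(1.0, size=(2000, 5000)).astype(float)  # cells x genes (toy counts)

for name, reducer in [
    ("PCA", PCA(n_components=50)),
    ("GaussianRP", GaussianRandomProjection(n_components=50, random_state=0)),
    ("SparseRP", SparseRandomProjection(n_components=50, random_state=0)),
]:
    t0 = time.perf_counter()
    Z = reducer.fit_transform(X)
    print(f"{name:10s} shape {Z.shape}  time {time.perf_counter() - t0:.2f}s")
```

Downstream clustering (e.g. hierarchical or spherical k-means, as in the study) would then run on Z instead of X.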
- Research Article
1
- 10.1016/j.spl.2020.108743
- Feb 29, 2020
- Statistics & Probability Letters
A consistency theorem for randomized singular value decomposition
- Research Article
6
- 10.1177/1475921720960196
- Oct 15, 2020
- Structural Health Monitoring
Over the last several decades, structural health monitoring systems have grown into increasingly diverse applications. Structural health monitoring excels with large data sets that can capture the typical variability, novel events, and undesired degradation over time. As a result, the efficient storage and processing of these large, guided-wave data sets have become a key feature for successful application of structural health monitoring. This article describes a series of investigations into the use of random projection theory to significantly reduce storage burdens and improve computational complexity while not significantly affecting common damage detection strategies. Random projections are used as a lossy compression scheme that approximately retains metrics of distance or similarity between data records. Random projection compression is evaluated using a large data set of 1,440,000 measurements, which was collected over 5 months in an unprotected outdoor environment. Accurate damage detection, after the compression process, is achieved through correlation analysis and singular value decomposition. The results indicate consistent detection performance with over 95% storage compression and a more than 477-fold speed improvement in computational cost for singular value decomposition-based damage detection.
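A short sketch of the compression idea: a Gaussian random projection stores each record in far fewer coefficients while approximately preserving the pairwise distances that distance-based detection relies on; the record length and compression ratio below are invented for illustration.

```python
# Hedged sketch: lossy compression of waveform records by random projection,
# checking that the pairwise distance between two records is roughly retained.
import numpy as np

rng = np.random.default_rng(10)
n_records, n_samples = 100, 10000
X = rng.standard_normal((n_records, n_samples))      # toy guided-wave records

k = 500                                              # 95% storage compression
R = rng.standard_normal((n_samples, k)) / np.sqrt(k)
Xc = X @ R

d_orig = np.linalg.norm(X[0] - X[1])
d_comp = np.linalg.norm(Xc[0] - Xc[1])
print(f"distance before {d_orig:.1f}, after {d_comp:.1f} "
      f"(ratio {d_comp / d_orig:.3f})")
```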
- Research Article
19
- 10.1155/2016/6529794
- Jan 1, 2016
- Mathematical Problems in Engineering
Because of its positive effects in dealing with the curse of dimensionality in big data, random projection has recently become a popular method for dimensionality reduction. In this paper, a theoretical analysis of the influence of random projection on the variability of a data set and the dependence between dimensions is proposed. Together with the theoretical analysis, a new fuzzy c-means (FCM) clustering algorithm with random projection is presented. Empirical results verify that the new algorithm not only preserves the accuracy of the original FCM clustering but is also more efficient than the original clustering and than clustering with singular value decomposition. At the same time, a new cluster ensemble approach based on FCM clustering with random projection is also proposed. The new aggregation method can efficiently compute the spectral embedding of the data with a cluster-center-based representation which scales linearly with data size. Experimental results reveal the efficiency, effectiveness, and robustness of our algorithm compared to the state-of-the-art methods.
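A hedged sketch of an ensemble in the spirit of this abstract: several fuzzy c-means runs on independent random projections merged through a co-association matrix and a spectral embedding. This construction is one plausible reading, not the paper's exact aggregation method, and all sizes are toy choices.

```python
# Sketch only: FCM over several random projections, merged via co-association.
import numpy as np

rng = np.random.default_rng(11)
X = np.vstack([rng.normal(m, 1.0, size=(100, 200)) for m in (-3, 0, 3)])

def fcm(X, c, m=2.0, iters=30, rng=rng):
    """Textbook fuzzy c-means; returns hard labels from the memberships."""
    U = rng.random((len(X), c)); U /= U.sum(1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(0)[:, None]
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        U = 1 / d ** (2 / (m - 1)); U /= U.sum(1, keepdims=True)
    return U.argmax(1)

# Ensemble: co-association matrix accumulated over independent projections
co = np.zeros((len(X), len(X)))
for _ in range(5):
    R = rng.standard_normal((200, 20)) / np.sqrt(20)
    lab = fcm(X @ R, c=3)
    co += (lab[:, None] == lab[None, :])
co /= 5

# Spectral embedding of the co-association matrix, then one final FCM on it
vals, vecs = np.linalg.eigh(co)
emb = vecs[:, -3:]                       # top eigenvectors as the embedding
final = fcm(emb, c=3)
print("ensemble cluster sizes:", np.bincount(final))
```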
- Research Article
209
- 10.1109/tit.2014.2375327
- Feb 1, 2015
- IEEE Transactions on Information Theory
We study the topic of dimensionality reduction for $k$-means clustering. Dimensionality reduction encompasses the union of two approaches: 1) feature selection and 2) feature extraction. A feature selection-based algorithm for $k$-means clustering selects a small subset of the input features and then applies $k$-means clustering on the selected features. A feature extraction-based algorithm for $k$-means clustering constructs a small set of new artificial features and then applies $k$-means clustering on the constructed features. Despite the significance of $k$-means clustering as well as the wealth of heuristic methods addressing it, provably accurate feature selection methods for $k$-means clustering are not known. On the other hand, two provably accurate feature extraction methods for $k$-means clustering are known in the literature; one is based on random projections and the other is based on the singular value decomposition (SVD). This paper makes further progress toward a better understanding of dimensionality reduction for $k$-means clustering. Namely, we present the first provably accurate feature selection method for $k$-means clustering and, in addition, we present two feature extraction methods. The first feature extraction method is based on random projections and it improves upon the existing results in terms of time complexity and number of features needed to be extracted. The second feature extraction method is based on fast approximate SVD factorizations and it also improves upon the existing results in terms of time complexity. The proposed algorithms are randomized and provide constant-factor approximation guarantees with respect to the optimal $k$-means objective value.
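A quick sketch of RP-based feature extraction for k-means: cluster in a low-dimensional random subspace and compare the k-means objective of the resulting labels against full-space clustering. The data and dimensions are illustrative, and the paper's approximation guarantees are not reproduced here.

```python
# Hedged sketch: k-means on randomly projected features vs. the full space.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.random_projection import GaussianRandomProjection

rng = np.random.default_rng(12)
X = np.vstack([rng.normal(m, 1.0, size=(200, 500)) for m in (-4, 0, 4)])

full = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

Z = GaussianRandomProjection(n_components=30, random_state=0).fit_transform(X)
proj = KMeans(n_clusters=3, n_init=10, random_state=0).fit(Z)

# Evaluate both label sets against the FULL-space k-means objective
def objective(X, labels, k=3):
    return sum(((X[labels == j] - X[labels == j].mean(0)) ** 2).sum() for j in range(k))

print("objective, full-space labels:", round(objective(X, full.labels_), 1))
print("objective, RP labels:        ", round(objective(X, proj.labels_), 1))
```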