Articles published on Greedy approximation
238 Search results
- Research Article
- 10.22331/q-2025-11-17-1911
- Nov 17, 2025
- Quantum
- Aniruddha Sen + 2 more
Quantum networks are important for quantum communication, enabling tasks such as quantum teleportation, quantum key distribution, quantum sensing, and quantum error correction, often utilizing graph states, a specific class of multipartite entangled states that can be represented by graphs. We propose a novel approach for distributing graph states across a quantum network. We show that the distribution of graph states can be characterized by a system of subgraph complementations, which we also relate to the minimum rank of the underlying graph and the degree of entanglement quantified by the Schmidt-rank of the quantum state. We analyze resource usage for our algorithm and show that it improves on the number of qubits, bits for classical communication, and EPR pairs utilized, as compared to prior work. In fact, the number of local operations and resource consumption for our approach scales linearly in the number of vertices. This produces a quadratic improvement in completion time for several classes of graph states represented by dense graphs, which translates into an exponential improvement by allowing parallelization of gate operations. This leads to improved fidelities in the presence of noisy operations, as we show through simulation. We classify common classes of graph states, along with their optimal distribution time using subgraph complementations. We find a sequence of subgraph complementation operations to distribute an arbitrary graph state, which we conjecture is close to the optimal sequence, and establish upper bounds on distribution time along with providing approximate greedy algorithms.
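The subgraph complementation operation this abstract builds on is easy to sketch. The toy helper below (function name and adjacency-matrix representation are my own, not the paper's protocol) flips every edge among a chosen vertex subset of a simple graph:

```python
# Minimal sketch of a subgraph complementation: flip every edge among a
# chosen vertex subset S of a simple graph given as a 0/1 adjacency matrix.
import numpy as np

def subgraph_complement(A, S):
    """Complement the induced subgraph on vertex set S (no self-loops)."""
    A = A.copy()                  # leave the caller's matrix untouched
    for i in S:
        for j in S:
            if i != j:
                A[i, j] ^= 1      # flip the edge i-j
    return A

# Triangle on {0, 1, 2}; complementing on all three vertices empties it.
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=int)
print(subgraph_complement(A, {0, 1, 2}))  # zero matrix
```

Complementing a complete induced subgraph removes all its edges, which is why sequences of such operations can build or dismantle dense graph states efficiently.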
- Research Article
- 10.54097/az8w1s78
- Mar 30, 2025
- Highlights in Science, Engineering and Technology
- Houyu Lin
A feedback arc set of a directed graph is a subset of the graph's edges that contains at least one edge from every cycle in the graph. The smallest such subset is the minimum feedback arc set (FAS). Three methods that attempt to solve this problem use equivalency and NP reductions, an approximate greedy algorithm, and an integer linear program for an exact solution. The minimum FAS can be applied to real-world problems such as sporting tournaments and ranked voting.
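The approximate greedy approach can be sketched in a few lines. This is a generic cycle-breaking heuristic (not necessarily the exact method the article evaluates): repeatedly find a directed cycle by depth-first search and delete one of its edges until the graph is acyclic.

```python
# Greedy feedback-arc-set sketch: break one cycle per iteration.
def find_cycle(adj):
    """Return a list of edges forming a directed cycle, or None."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {u: WHITE for u in adj}
    parent = {}

    def dfs(u):
        color[u] = GRAY
        for v in adj[u]:
            if color[v] == GRAY:            # back edge: cycle found
                cycle, w = [(u, v)], u
                while w != v:               # walk back up the DFS tree
                    cycle.append((parent[w], w))
                    w = parent[w]
                return cycle
            if color[v] == WHITE:
                parent[v] = u
                found = dfs(v)
                if found:
                    return found
        color[u] = BLACK
        return None

    for u in adj:
        if color[u] == WHITE:
            found = dfs(u)
            if found:
                return found
    return None

def greedy_fas(adj):
    """Greedily remove one edge per detected cycle until acyclic."""
    adj = {u: list(vs) for u, vs in adj.items()}
    fas = []
    while (cycle := find_cycle(adj)) is not None:
        u, v = cycle[0]                     # greedy choice: first cycle edge
        adj[u].remove(v)
        fas.append((u, v))
    return fas

g = {0: [1], 1: [2], 2: [0], 3: []}         # one 3-cycle
print(greedy_fas(g))                        # one removed edge suffices
```

This heuristic gives no approximation guarantee by itself, but it illustrates the greedy template the exact integer-linear-program formulation competes against.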
- Research Article
- 10.4213/rm10245
- Jan 1, 2025
- Uspekhi Matematicheskikh Nauk
- Vladimir Nikolaevich Temlyakov
Sparse approximation is important in many applications because of the concise form of an approximant and good accuracy guarantees. The theory of compressed sensing, which has proved very useful in image processing and the data sciences, is based on the concept of sparsity. A fundamental issue in sparse approximation is the construction of efficient algorithms that provide good approximation. It turns out that greedy algorithms with respect to dictionaries are very good from this point of view. They are simple to implement, and there are well-developed theoretical guarantees of their efficiency. This survey/tutorial paper contains a brief description of different kinds of greedy algorithms and results on their convergence and rate of convergence. Also, in Sections 14 and 15 we give some typical proofs of convergence and rate-of-convergence results for important greedy algorithms, and in Section 16 we list some open problems. Bibliography: 91 titles.
- Research Article
- 10.4213/im9608e
- Jan 1, 2025
- Izvestiya: Mathematics
- Iurii Petrovich Svetlov
We consider a new version of a greedy algorithm in biorthogonal systems in separable Banach spaces. We consider approximations of an element $f$ by its $m$-term greedy sum, which is constructed from the expansion by choosing the $m$ coefficients of greatest absolute value. It is known that the greedy algorithm does not always converge to the original element. We prove a theorem showing that the new version of the greedy algorithm (called the regularized greedy algorithm) always converges to the original element in Efimov-Stechkin spaces. We also construct examples that show the significance of the conditions of the main theorem.
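In the special case of an orthonormal basis (a simplification of the paper's biorthogonal Banach-space setting, where convergence is the delicate point), the $m$-term greedy sum is just coefficient thresholding. A minimal sketch:

```python
# Thresholding greedy approximation over an orthonormal basis:
# keep the m expansion coefficients of largest absolute value.
import numpy as np

def greedy_m_term(coeffs, m):
    """Zero out all but the m largest-magnitude coefficients."""
    coeffs = np.asarray(coeffs, dtype=float)
    keep = np.argsort(-np.abs(coeffs))[:m]   # indices of m largest |c_k|
    approx = np.zeros_like(coeffs)
    approx[keep] = coeffs[keep]
    return approx

c = [0.1, -3.0, 0.5, 2.0, -0.2]
print(greedy_m_term(c, 2))                   # keeps -3.0 and 2.0
```

In a Hilbert space this rule is optimal among $m$-term approximations from the basis; the paper's point is that in general Banach spaces it can fail to converge, which the regularized variant repairs.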
- Research Article
- 10.1007/s10878-024-01229-4
- Oct 28, 2024
- Journal of Combinatorial Optimization
- Hao Zhong
On greedy approximation algorithm for the minimum resolving dominating set problem
- Research Article
- 10.1007/s00453-024-01268-7
- Sep 12, 2024
- Algorithmica
- Piotr Krysta + 2 more
Ultimate Greedy Approximation of Independent Sets in Subcubic Graphs
- Research Article
- 10.1128/msphere.00139-24
- Jul 30, 2024
- mSphere
- Sara Rahiminejad + 3 more
Gene knockout studies suggest that ~300 genes in a bacterial genome and ~1,100 genes in a yeast genome cannot be deleted without loss of viability. These single-gene knockout experiments do not account for negative genetic interactions, when two or more genes can each be deleted without effect, but their joint deletion is lethal. Thus, large-scale single-gene deletion studies underestimate the size of a minimal gene set compatible with cell survival. In yeast Saccharomyces cerevisiae, the viability of all possible deletions of gene pairs (2-tuples), and of some deletions of gene triplets (3-tuples), has been experimentally tested. To estimate the size of a yeast minimal genome from that data, we first established that finding the size of a minimal gene set is equivalent to finding the minimum vertex cover in the lethality (hyper)graph, where the vertices are genes and (hyper)edges connect k-tuples of genes whose joint deletion is lethal. Using the Lovász-Johnson-Chvatal greedy approximation algorithm, we computed the minimum vertex cover of the synthetic-lethal 2-tuples graph to be 1,723 genes. We next simulated the genetic interactions in 3-tuples, extrapolating from the existing triplet sample, and again estimated minimum vertex covers. The size of a minimal gene set in yeast rapidly approaches the size of the entire genome even when considering only synthetic lethalities in k-tuples with small k. In contrast, several studies reported successful experimental reductions of yeast and bacterial genomes by simultaneous deletions of hundreds of genes, without eliciting synthetic lethality. We discuss possible reasons for this apparent contradiction.
IMPORTANCE: How can we estimate the smallest number of genes sufficient for a unicellular organism to survive on a rich medium? One approach is to remove genes one at a time and count how many of such deletion strains are unable to grow. However, the single-gene knockout data are insufficient, because joint gene deletions may result in negative genetic interactions, also known as synthetic lethality. We used a technique from graph theory to estimate the size of a minimal yeast genome from partial data on synthetic lethality. The number of potential synthetic lethal interactions grows very fast when multiple genes are deleted, revealing a paradoxical contrast with the experimental reductions of the yeast genome by ~100 genes, and of bacterial genomes by several hundreds of genes.
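The Lovász-Johnson-Chvátal greedy rule used above, specialized here to ordinary 2-tuples with illustrative toy gene names, repeatedly adds the vertex that hits the most still-uncovered (hyper)edges:

```python
# Greedy approximation for minimum vertex cover of a (hyper)graph:
# repeatedly add the vertex appearing in the most uncovered hyperedges.
def greedy_vertex_cover(edges):
    uncovered = [set(e) for e in edges]
    cover = set()
    while uncovered:
        counts = {}                          # vertex -> #uncovered edges hit
        for e in uncovered:
            for v in e:
                counts[v] = counts.get(v, 0) + 1
        best = max(counts, key=counts.get)   # greedy choice
        cover.add(best)
        uncovered = [e for e in uncovered if best not in e]
    return cover

# Toy synthetic-lethal pairs (a tiny lethality graph).
pairs = [("a", "b"), ("a", "c"), ("b", "c"), ("d", "e")]
cover = greedy_vertex_cover(pairs)
print(cover)   # a 3-vertex set hitting every pair
```

For hyperedges of size at most k this greedy achieves the classic H(k) (harmonic-number) approximation factor, which is why it is a reasonable estimator on the synthetic-lethality data.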
- Research Article
- 10.1016/j.jfa.2024.110594
- Jul 22, 2024
- Journal of Functional Analysis
- Fernando Albiac + 2 more
The main results in this paper contribute to bringing to the fore novel underlying connections between the contemporary concepts and methods springing from greedy approximation theory and the well-established techniques of classical Banach spaces. We do so by showing that bounded-oscillation unconditional bases, introduced by Dilworth et al. in 2009 in the setting of their search for extraction principles of subsequences verifying partial forms of unconditionality, are the same as truncation quasi-greedy bases, a new breed of bases that appear naturally in the study of the performance of the thresholding greedy algorithm in Banach spaces. We use this identification to provide examples of bases showing that bounded-oscillation unconditionality is a stronger condition than Elton's near unconditionality. We also take advantage of our arguments to provide examples that allow us to tell apart certain types of bases that verify either debilitated unconditionality conditions or weaker forms of quasi-greediness in the context of abstract approximation theory.
- Research Article
- 10.3390/math12132111
- Jul 5, 2024
- Mathematics
- Robin Herkert + 5 more
We address the challenging application of 3D pore scale reactive flow under varying geometry parameters. The task is to predict time-dependent integral quantities, i.e., breakthrough curves, from the given geometries. As the 3D reactive flow simulation is highly complex and computationally expensive, we are interested in data-based surrogates that can give a rapid prediction of the target quantities of interest. This setting is an example of an application with scarce data, i.e., only having a few available data samples, while the input and output dimensions are high. In this scarce data setting, standard machine learning methods are likely to fail. Therefore, we resort to greedy kernel approximation schemes that have been shown to be efficient meshless approximation techniques for multivariate functions. We demonstrate that such methods can efficiently be used in the high-dimensional input/output case under scarce data. In particular, we show that the vectorial kernel orthogonal greedy approximation (VKOGA) procedure with a data-adapted two-layer kernel yields excellent predictors for learning from 3D geometry voxel data via either morphological descriptors or principal component analysis.
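A minimal residual-based ("f-greedy") variant of greedy kernel approximation can be sketched as follows. VKOGA itself adds Newton-basis updates, vector-valued targets, and the two-layer kernel mentioned above, so this is only an illustration; the function names and the kernel width `eps` are my own choices.

```python
# f-greedy kernel interpolation sketch: at each step, pick the sample
# where the current interpolant's residual is largest, then re-solve.
import numpy as np

def gaussian_kernel(X, Y, eps=50.0):
    """Gaussian RBF kernel matrix between point sets X (n,d) and Y (m,d)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-eps * d2)

def f_greedy(X, y, n_centers, eps=50.0):
    """Greedily select interpolation centers; return indices and weights."""
    idx = [int(np.argmax(np.abs(y)))]            # start at the largest |y|
    for _ in range(n_centers - 1):
        w = np.linalg.solve(gaussian_kernel(X[idx], X[idx], eps), y[idx])
        residual = y - gaussian_kernel(X, X[idx], eps) @ w
        idx.append(int(np.argmax(np.abs(residual))))
    w = np.linalg.solve(gaussian_kernel(X[idx], X[idx], eps), y[idx])
    return idx, w

X = np.linspace(0, 1, 50)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
idx, w = f_greedy(X, y, 6)
approx = gaussian_kernel(X, X[idx]) @ w          # surrogate predictions
```

Because centers are chosen where the surrogate is currently worst, only a handful of samples are needed, which is the property that makes such schemes attractive in the scarce-data regime described above.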
- Research Article
- 10.1016/j.matcom.2024.05.012
- May 18, 2024
- Mathematics and Computers in Simulation
- Alessandro Mazzoccoli + 2 more
In this paper, we consider a step function characterized by a real-valued sequence and its linear expansion representation constructed via the matching pursuit (MP) algorithm. We utilize a waveform dictionary based on the triangular function as part of this algorithm and representation. The waveform dictionary comprises waveforms localized in the time–frequency domain. In view of this, we prove that the triangular waveforms are more efficient than the rectangular waveforms used in a prior study, achieving a product of variances in the time–frequency domain closer to the lower bound of the Heisenberg Uncertainty Principle. We provide an MP algorithm solvable in polynomial time, in contrast to the common exponential time when using Gaussian windows. We apply this algorithm to simulated data and to real GDP data from 1947–2024 to demonstrate its application and efficiency.
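The matching pursuit loop itself is short. The sketch below uses a random unit-norm dictionary for illustration rather than the paper's triangular time-frequency waveforms: greedily pick the atom most correlated with the residual, subtract its contribution, repeat.

```python
# Matching pursuit over a dictionary with unit-norm columns.
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedy expansion of `signal` over dictionary columns.
    Returns the coefficient vector and the final residual."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        inner = dictionary.T @ residual          # correlations with atoms
        k = int(np.argmax(np.abs(inner)))        # best-matching atom
        coeffs[k] += inner[k]
        residual -= inner[k] * dictionary[:, k]  # remove its contribution
    return coeffs, residual

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)                   # unit-norm atoms
x = 2.0 * D[:, 5] - 1.5 * D[:, 40]               # 2-sparse ground truth
c, r = matching_pursuit(x, D, n_iter=20)
print(np.linalg.norm(r))                          # residual norm shrinks
```

Each iteration strictly decreases the residual norm, and the per-iteration cost is one matrix-vector product, which is the structure the paper exploits when arguing for polynomial-time solvability with its triangular dictionary.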
- Research Article
- 10.3390/math12101546
- May 15, 2024
- Mathematics
- Raphael Zaccone
The advancement of autonomous capabilities in maritime navigation has gained significant attention, with a trajectory moving from decision support systems to full autonomy. This push towards autonomy has led to extensive research focusing on collision avoidance, a critical aspect of safe navigation. Among the various possible approaches, dynamic programming (DP) is a promising tool for optimizing collision avoidance maneuvers. This paper presents a DP formulation for the collision avoidance of autonomous vessels. We set up the problem framework, formulate it as a multi-stage decision process, define cost functions and constraints focusing on the actual requirements a marine maneuver must comply with, and propose a solution algorithm leveraging parallel computing. Additionally, we present a greedy approximation to reduce algorithm complexity. We put the proposed algorithms to the test in realistic navigation scenarios and also develop an extensive test on a large set of randomly generated scenarios, comparing them with the RRT* algorithm using performance metrics proposed in the literature. The results show the potential benefits of an autonomous navigation or decision support framework.
- Research Article
- 10.3390/network4020009
- May 6, 2024
- Network
- Sadaf Ul Zuhra + 3 more
The escalating demand for high-quality video streaming poses a major challenge for communication networks today. Catering to these bandwidth-hungry video streaming services places a huge burden on the limited spectral resources of communication networks, limiting the resources available for other services as well. Large volumes of video traffic can lead to severe network congestion, particularly during live streaming events, which require sending the same content to a large number of users simultaneously. For such applications, multicast transmission can effectively combat network congestion while meeting the demands of all the users by serving groups of users requesting the same content over shared spectral resources. Streaming services can further benefit from multi-connectivity, which allows users to receive content from multiple base stations simultaneously. Integrating multi-connectivity within multicast streaming can improve the system resource utilization while also providing seamless connectivity to multicast users. Toward this end, this work studied the impact of using multi-connectivity (MC) alongside wireless multicast for meeting the resource requirements of video streaming. Our findings show that MC substantially enhances the performance of multicast streaming, particularly benefiting cell-edge users who often experience poor channel conditions. We especially considered the number of users that can be simultaneously served by multi-connected multicast systems. It was observed that about 60% of the users that are left unserved under single-connectivity multicast are successfully served using the same resources by employing multi-connectivity in multicast transmissions. We prove that the optimal resource allocation problem for MC multicast is NP-hard. As a solution, we present a greedy approximation algorithm with an approximation factor of (1−1/e). Furthermore, we establish that no other polynomial-time algorithm can offer a superior approximation. 
To generate realistic video traffic patterns in our simulations, we made use of traces from actual videos. Our results clearly demonstrate that multi-connectivity leads to significant enhancements in the performance of multicast streaming.
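The (1 − 1/e) factor quoted above is the guarantee of the classic greedy rule for monotone submodular maximization under a cardinality constraint. A minimal coverage-style sketch (toy base-station data, not the paper's resource-allocation model): at each step, add the option with the largest marginal gain.

```python
# Greedy maximization of a monotone submodular objective (user coverage)
# under a cardinality budget: the standard (1 - 1/e)-approximation rule.
def greedy_max_coverage(options, budget):
    """options: dict name -> set of users covered; pick `budget` options."""
    chosen, covered = [], set()
    for _ in range(budget):
        gains = {name: len(users - covered)          # marginal gain
                 for name, users in options.items() if name not in chosen}
        best = max(gains, key=gains.get)
        if gains[best] == 0:
            break                                    # nothing left to gain
        chosen.append(best)
        covered |= options[best]
    return chosen, covered

opts = {"bs1": {1, 2, 3}, "bs2": {3, 4}, "bs3": {4, 5, 6}}
picked, users = greedy_max_coverage(opts, budget=2)
print(picked, users)
```

Note how the second pick is bs3, not bs2, because marginal gain is always measured against what is already covered; that re-evaluation is exactly what the submodularity argument relies on.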
- Research Article
- 10.1142/s0219691324500085
- Apr 22, 2024
- International Journal of Wavelets, Multiresolution and Information Processing
- Dilinuer Tuoheti + 2 more
This note designs a statistical model involving a TM system with Gaussian noise. Based on adaptive approximation of the parameters in the TM system and greedy choices of the generalized Fourier coefficients, a regression function is determined for this model from the finite observed data. Finally, its statistical properties are proved.
- Research Article
- 10.3390/rs16030506
- Jan 28, 2024
- Remote Sensing
- Shaoming Wei + 6 more
To realize multitarget trajectory tracking under non-Gaussian heavy-tailed noise, we propose a Gaussian–Student t-mixture distribution-based trajectory cardinality probability hypothesis density filter (GSTM-TCPHD). We introduce the multi-sensor GSTM-TCPHD (MS-GSTM-TCPHD) filter to enhance tracking performance. Conventional cardinality probability hypothesis density (CPHD) filters typically assume Gaussian noise and struggle to accurately establish target trajectories when faced with heavy-tailed non-Gaussian distributions. Heavy-tailed noise leads to significant estimation errors and filter dispersion. Moreover, the exact trajectory of the target is crucial for tracking and prediction. Our proposed GSTM-TCPHD filter utilizes the GSTM distribution to model heavy-tailed noise, reducing modeling errors and generating a set of potential target trajectories. Since single sensors have a limited field of view and limited measurement information, we extend the filter to a multi-sensor scenario. To tackle the issue of data explosion from multiple sensors, we employed a greedy approximation method to assess measurements and introduced the MS-GSTM-TCPHD filter. The simulation results demonstrate that our proposed filter outperforms the CPHD/TCPHD filter and Student’s t-based TCPHD filter in terms of accurately estimating the trajectories of multiple targets during tracking while also achieving improved accuracy and shorter processing time.
- Research Article
- 10.4213/rm10186e
- Jan 1, 2024
- Russian Mathematical Surveys
- Alexander Vladimirovich Gasnikov + 1 more
The general theory of greedy approximation with respect to arbitrary dictionaries is well developed in the case of real Banach spaces. Recently some results proved for the Weak Chebyshev Greedy Algorithm (WCGA) in the case of real Banach spaces were extended to the case of complex Banach spaces. In this paper we extend some of the results known in the real case for greedy algorithms other than the WCGA to the case of complex Banach spaces. Bibliography: 25 titles.
- Research Article
- 10.4213/rm10186
- Jan 1, 2024
- Uspekhi Matematicheskikh Nauk
- Alexander Vladimirovich Gasnikov + 1 more
The general theory of greedy approximation with respect to arbitrary dictionaries is well developed in the case of real Banach spaces. Recently some results proved for the Weak Chebyshev Greedy Algorithm (WCGA) in the case of real Banach spaces were extended to the case of complex Banach spaces. In this paper we extend some of the results known in the real case for greedy algorithms other than the WCGA to the case of complex Banach spaces. Bibliography: 25 titles.
- Research Article
- 10.5565/publmat6812411
- Jan 1, 2024
- Publicacions Matemàtiques
- Guillermo Rey
We describe a greedy algorithm that approximates the Carleson constant of a collection of general sets. The approximation has a logarithmic loss in a general setting, but is optimal up to a constant with only mild geometric assumptions. The constructive nature of the algorithm gives additional information about the almost-disjoint structure of sparse collections. As applications, we give three results for collections of axis-parallel rectangles in every dimension. The first is a constructive proof of the equivalence between Carleson and sparse collections, first shown by Hänninen. The second is a structure theorem proving that every collection E can be partitioned into O(N) sparse subfamilies, where N is the Carleson constant of E. We also give examples showing that such a decomposition is impossible when the geometric assumptions are dropped. The third application is a characterization of the Carleson constant involving only $L^{1,\infty}$ estimates.
- Research Article
- 10.1109/tac.2023.3269323
- Jan 1, 2024
- IEEE Transactions on Automatic Control
- Arthur Castello B De Oliveira + 2 more
We develop some basic principles for the design and robustness analysis of a continuous-time bilinear dynamical network, where an attacker can manipulate the strength of the interconnections/edges between some of the agents/nodes. We formulate the edge protection optimization problem of picking a limited number of attack-free edges and minimizing the impact of the attack over the bilinear dynamical network. In particular, the $\mathcal{H}_2$-norm of bilinear systems is known to capture robustness and performance properties analogous to its linear counterpart and provides valuable insights for identifying which edges are most sensitive to attacks. The exact optimization problem is combinatorial in the number of edges, and brute-force approaches show poor scalability. However, we show that the $\mathcal{H}_2$-norm as a cost function is supermodular and, therefore, allows for efficient greedy approximations of the optimal solution. We illustrate and compare the effectiveness of our theoretical findings via numerical simulations.
- Research Article
- 10.1016/j.bpj.2024.01.008
- Jan 1, 2024
- Biophysical Journal
- Alex Rojewski + 4 more
An accurate probabilistic step finder for time-series analysis
- Research Article
- 10.1016/j.knosys.2023.111070
- Oct 14, 2023
- Knowledge-Based Systems
- Weihang Zhang + 3 more
Knowledge graph completion (KGC), a task that aims at predicting missing links with existing information inside a knowledge graph (KG), has emerged as a popular research area in recent years. While many existing works have demonstrated effectiveness on KGC over a single knowledge graph, limited effort has been devoted to exploring the potentially complementary nature of multiple KGs. In this work, we proposed a novel method called CA-MKGC (Conflict-aware Multilingual Knowledge Graph Completion) for multiple knowledge graph completion (MKGC), aiming to alleviate the sparseness of a single knowledge graph by leveraging information from other knowledge graphs. We designed an intra-KG graph convolutional network encoder that regards the seed alignments between KGs as edges for intra-KG message propagation to model all KGs in a unified model, while also adopting an iterative mechanism to progressively incorporate newly predicted alignments along with the newly inferred facts into the learning process. Additionally, we employed an active learning mechanism and a greedy approximation to a semi-constrained optimization problem to focus on learning the structural prior knowledge that is difficult to learn in semantic space, limiting the propagation of error in the iterative training process. Experimental results on multilingual KG datasets demonstrated that our method achieved state-of-the-art results.