An efficient algorithm for generating transmission irregular trees
- Conference Article
- 10.1117/12.233244
- Feb 27, 1996
An optimized multi-level codebook searching (MCS) algorithm for vector quantization is presented in this paper. Although it belongs to the category of fast nearest neighbor searching (FNNS) algorithms for vector quantization, the MCS algorithm is not a variation of any existing FNNS algorithm (such as the k-d tree, partial-distance, or triangle-inequality searching algorithms). A multi-level search theory is introduced, and the problem of implementing it is solved by a specially defined irregular tree structure that can be built from a training set. This irregular tree structure differs from the tree structures used in TSVQ, pruned-tree VQ, quadtree VQ, and related schemes. Strictly speaking, it cannot be called a tree structure, since it allows a node to have more than one set of parents; it is a directed graph. This is the essential difference between the MCS algorithm and other TSVQ algorithms, and it ensures the better performance of MCS. An efficient design procedure is given to find the optimized irregular tree for practical sources. Simulation results of applying the MCS algorithm to image VQ show that it can reduce search complexity to less than 3% of that of exhaustive-search vector quantization (ESVQ) (4096 codevectors, dimension 16) while introducing negligible error (0.064 dB degradation from ESVQ). Simulation results also show that search complexity increases nearly linearly with bit rate.
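The partial-distance search named in the abstract as one of the classical FNNS baselines can be sketched briefly (this is a generic illustration of the baseline technique, not the MCS algorithm itself; the function name and data layout are my own):

```python
def partial_distance_search(x, codebook):
    """Nearest-neighbor search with the partial-distance early-exit rule:
    accumulate the squared distance dimension by dimension and abandon a
    candidate codevector as soon as the running sum exceeds the best
    distance found so far."""
    best_idx, best_dist = -1, float("inf")
    for i, c in enumerate(codebook):
        d = 0.0
        for xj, cj in zip(x, c):
            d += (xj - cj) ** 2
            if d >= best_dist:   # cannot beat the current best: skip the rest
                break
        else:                    # loop finished without early exit: new best
            best_idx, best_dist = i, d
    return best_idx, best_dist
```

The early exit trims most of the per-dimension work for distant codevectors; tree-structured methods such as MCS instead reduce the number of codevectors examined at all.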
- Research Article
2
- 10.1023/a:1006241826565
- Jul 1, 2000
- Journal of Automated Reasoning
A given binary resolution proof, represented as a binary tree, is said to be minimal if the resolutions cannot be reordered to generate an irregular proof. Minimality extends Tseitin's regularity restriction and still retains completeness. A linear-time algorithm is introduced to decide whether a given proof is minimal. This algorithm can be used by a deduction system that avoids redundancy by retaining only minimal proofs and thus lessens its reliance on subsumption, a more general but more expensive technique. Any irregular binary resolution tree is made strictly smaller by an operation called Surgery, which runs in time linear in the size of the tree. After surgery the result proved by the new tree is nonstrictly more general than the original result and has fewer violations of the regular restriction. Furthermore, any nonminimal tree can be made irregular in linear time by an operation called Splay. Thus a combination of splaying and surgery efficiently reduces a nonminimal tree to a minimal one. Finally, a close correspondence between clause trees, recently introduced by the authors, and binary resolution trees is established. In that sense this work provides the first linear-time algorithms that detect minimality and perform surgery on clause trees.
- Research Article
20
- 10.1145/3232850
- Feb 24, 2019
- ACM Transactions on Mathematical Software
Hierarchical matrices are space- and time-efficient representations of dense matrices that exploit the low-rank structure of matrix blocks at different levels of granularity. The hierarchically low-rank block partitioning produces representations that can be stored and operated on in near-linear complexity instead of the usual polynomial complexity of dense matrices. In this article, we present high-performance implementations of matrix-vector multiplication and compression operations for the H² variant of hierarchical matrices on GPUs. The H² variant exploits, in addition to the hierarchical block partitioning, hierarchical bases for the block representations, and results in a scheme that requires only O(n) storage and O(n) complexity for the mat-vec and compression kernels. These two operations are at the core of algebraic operations for hierarchical matrices: the mat-vec is a ubiquitous operation in numerical algorithms, while compression/recompression represents a key building block for other algebraic operations, which require periodic recompression during execution. The difficulties in developing efficient GPU algorithms come primarily from the irregular tree data structures that underlie the hierarchical representations, and the key to performance is to recast the computations on flattened trees in ways that allow batched linear algebra operations to be performed. This requires marshaling the irregularly laid out data in a way that allows it to be used by the batched routines. Marshaling operations involve only pointer arithmetic with no data movement and as a result have minimal overhead. Our numerical results on covariance matrices from 2D and 3D problems from spatial statistics show the high efficiency of our routines, which achieve over 550 GB/s for the bandwidth-limited matrix-vector operation and over 850 GFlop/s in sustained performance for the compression operation on the P100 Pascal GPU.
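The core idea of recasting tree computations for batched routines is to flatten the irregular tree level by level, so that every node on a level can be handed to one batched kernel call. A minimal sketch of that flattening step (the node layout and function name are assumptions for illustration, not the article's actual data structures):

```python
def flatten_by_level(root):
    """Group tree nodes level by level so that each level's homogeneous
    work (e.g. many small dense mat-vecs) can be dispatched as a single
    batched linear algebra call instead of one call per node.
    Each node is a dict: {"data": ..., "children": [...]}."""
    levels = []
    frontier = [root]
    while frontier:
        levels.append([n["data"] for n in frontier])   # one batch per level
        frontier = [c for n in frontier for c in n.get("children", [])]
    return levels
```

In a GPU implementation, each per-level batch would be marshaled into pointer arrays and passed to a batched GEMM-style routine; the traversal order above is what makes that batching possible despite the tree's irregularity.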
- Conference Article
4
- 10.1109/icassp.1991.150804
- Jan 1, 1991
The popular orthonormal decomposition techniques such as block transforms and filter banks for multiresolution signal representation are unified. Their comparative performance results based on signal energy compaction are presented for image and AR(1) sources. It is observed that the filter banks with computationally efficient filtering algorithms and irregular tree structures are potential competitors to the block transforms, particularly for image processing and coding applications.
- Conference Article
1
- 10.1109/icip.1996.560619
- Sep 16, 1996
There is a series of very efficient sequential algorithms that generate irregular trees during the process of detecting shapes in images. These algorithms are based on the fast Hough transform and are used for solving the most complex stages of detection when the production of the parameters is uncoupled. However, the parallelization of these algorithms is complex, and the problem of load distribution is crucial. We present three parallel algorithms for solving this problem. One of the solutions employs static load balancing. The other two use dynamic balancing with two different control policies: distributed and centralized. These algorithms may also be used for solving other problems, such as branch and bound, that generate irregular trees.
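The centralized dynamic-balancing policy mentioned above can be illustrated with a shared work queue: workers pull nodes from one central pool and push newly generated children back, so subtrees of unpredictable size are spread across workers at run time. A minimal thread-based sketch (the function name and the `expand` callback, which returns a node's children, are assumptions for illustration):

```python
import queue
import threading

def expand_parallel(roots, expand, num_workers=4):
    """Centralized dynamic load balancing over an irregular tree:
    one shared queue feeds all workers, so work generated by any
    worker is available to every other worker."""
    q = queue.Queue()
    visited, lock = [], threading.Lock()
    for r in roots:
        q.put(r)

    def worker():
        while True:
            node = q.get()            # block until work arrives
            with lock:
                visited.append(node)
            for child in expand(node):
                q.put(child)          # children return to the shared pool
            q.task_done()

    for _ in range(num_workers):
        threading.Thread(target=worker, daemon=True).start()
    q.join()                          # wait until every queued node is done
    return visited
```

A distributed policy would instead give each worker its own queue and migrate work between neighbors, trading the central queue's contention for extra communication.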