Algorithmic Aspects

Abstract

An abstract is not available for this content, so a preview is shown instead.

Similar Papers
  • Research Article
  • Cited by 8
  • 10.1109/lcomm.2023.3262859
3.8-Gbps Polar Belief Propagation Decoder on GPU
  • May 1, 2023
  • IEEE Communications Letters
  • Yuxing Chen + 4 more

In this work, a high-throughput belief propagation (BP) decoder of polar codes on graphics processing unit (GPU) is proposed for software-defined communication systems. The decoder is jointly optimized from algorithm and architecture aspects. From the algorithm aspect, the storage pattern and computation flow are optimized to reduce complexity. From the architecture aspect, different granularities of parallelism are extensively exploited to achieve high throughput. Equipped with these techniques, a high-speed GPU-based BP decoder is developed, and experimental results show that the proposed decoder can improve the normalized throughput by 76.4% to 294.1% compared to the state-of-the-art GPU-based BP decoder.

  • Conference Article
  • Cited by 15
  • 10.1117/12.621403
A preliminary evaluation of 3D mesh animation coding techniques
  • Aug 18, 2005
  • Proceedings of SPIE
  • Khaled Mamou + 2 more

This paper provides an overview of the state-of-the-art techniques recently developed within the emerging field of dynamic mesh compression. Static encoders, wavelet-based schemes, PCA-based approaches, differential temporal and spatio-temporal predictive techniques, and clustering-based representations are considered, presented, analyzed, and objectively compared in terms of compression efficiency, algorithmic and computational aspects, and offered functionalities (such as progressive transmission, scalable rendering, and field of applicability). The proposed comparative study reveals that: (1) clustering-based approaches offer the best compromise between compression performance and computational complexity; (2) PCA-based representations are highly efficient on long animated sequences (i.e., with a number of mesh vertices much smaller than the number of frames) at the price of a prohibitive computational complexity of the encoding process; (3) spatio-temporal Dynapack predictors provide simple yet effective predictive schemes that outperform simple predictors such as those considered within the interpolator compression node adopted by MPEG-4 within the AFX standard; (4) wavelet-based approaches, which provide the best compression performance for static meshes, show here again good results, with the additional advantage of a fully progressive representation, but suffer from an applicability limited to large meshes with at least several thousand vertices per connected component.
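As a rough illustration of the PCA-based approaches mentioned above (a generic sketch under stated assumptions, not one of the coders evaluated in the paper), an animation stored as a frames-by-coordinates matrix can be compressed by keeping only its top-k principal components; the function names are illustrative:

```python
import numpy as np

def pca_compress(frames, k):
    """Compress an animated mesh (F x 3V matrix: one row of vertex
    coordinates per frame) by keeping the top-k principal components,
    in the spirit of PCA-based dynamic mesh coders."""
    mean = frames.mean(axis=0)
    centered = frames - mean
    # SVD exposes the principal directions of vertex motion over time.
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    coeffs = U[:, :k] * S[:k]   # per-frame coefficients (F x k)
    basis = Vt[:k]              # motion basis           (k x 3V)
    return mean, coeffs, basis  # store these instead of all frames

def pca_decompress(mean, coeffs, basis):
    """Reconstruct the animation from the compressed representation."""
    return mean + coeffs @ basis
```

This also shows why the approach suits long sequences with few vertices: the stored data shrinks from F x 3V values to roughly k(F + 3V), while the SVD cost grows with the matrix size.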

  • Conference Article
  • Cited by 66
  • 10.1145/2020408.2020573
Diversified ranking on large graphs
  • Aug 21, 2011
  • Hanghang Tong + 4 more

Diversified ranking on graphs is a fundamental mining task and has a variety of high-impact applications. There are two important open questions here. The first challenge is the measure - how to quantify the goodness of a given top-k ranking list that captures both the relevance and the diversity? The second challenge lies in the algorithmic aspect - how to find an optimal, or near-optimal, top-k ranking list that maximizes the measure we defined in a scalable way? In this paper, we address these challenges from an optimization point of view. Firstly, we propose a goodness measure for a given top-k ranking list. The proposed goodness measure intuitively captures both (a) the relevance between each individual node in the ranking list and the query; and (b) the diversity among different nodes in the ranking list. Moreover, we propose a scalable algorithm (linear wrt the size of the graph) that generates a provably near-optimal solution. The experimental evaluations on real graphs demonstrate its effectiveness and efficiency.
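The paper's goodness measure is not reproduced in the abstract; as a generic illustration of greedy diversified top-k selection, an MMR-style score (a hedged sketch with an assumed trade-off weight `lam`, not the paper's algorithm) balances query relevance against redundancy with the already-selected nodes:

```python
def diversified_topk(relevance, similarity, k, lam=0.7):
    """Greedy diversified top-k selection (MMR-style sketch).

    relevance[i]     : relevance of node i to the query
    similarity[i][j] : pairwise similarity between nodes i and j
    At each step, pick the candidate with the best trade-off between
    relevance and redundancy w.r.t. the nodes selected so far.
    """
    n = len(relevance)
    selected = []
    candidates = set(range(n))
    while candidates and len(selected) < k:
        def score(i):
            redundancy = max((similarity[i][j] for j in selected), default=0.0)
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected
```

Greedy selection of this kind is the usual route to the "near-optimal in linear time" guarantees the abstract alludes to, though the actual bound depends on the measure being (approximately) submodular.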

  • Conference Article
  • Cited by 6
  • 10.1109/icact.2007.358666
A Cost-Efficient WDM-PON Architecture Supporting Dynamic Wavelength and Time Slot Allocation
  • Feb 1, 2007
  • Hyowon Kim + 2 more

In this paper, we propose a cost-efficient WDM-PON architecture that addresses both system and algorithm aspects together. To reduce the system cost, we introduce the concept of a logical group, which groups some ONUs depending on the logical group size. On the algorithm side, a dynamic bandwidth allocation algorithm is designed to support the dynamism of the WDM-PON system, so that it can allocate wavelengths/time slots to a single ONU up to the maximum the system allows. The simulation study shows that our proposed architecture enhances dynamism in wavelength/time slot allocation in a cost-efficient manner.

  • Conference Article
  • Cited by 1
  • 10.1115/jrc2017-2223
Train Circulation Planning: Quantitative Approaches
  • Apr 4, 2017
  • Plínio Vilela + 3 more

The railway traffic system is an important player in passenger and freight transportation. This paper presents a survey of optimization models for the most commonly studied rail transportation problems related to train scheduling. Most reviewed papers have been proposed during the last decades. Apart from a few exceptions, the survey concentrates on published and easily accessible material. We have also elected to limit ourselves to contributions dealing specifically with rail transportation planning on single and double tracks. Each model has different goals, such as minimizing service delays, reducing unscheduled train stops, or minimizing the total time a train has to remain motionless, especially to allow crossings. For each group of problems, we propose a classification of models and describe their important characteristics by focusing on model structure and algorithmic aspects. The literature review involves papers published since the 1970s, but recent publications suggest that the problem is still heavily investigated. The main approaches considered are those that focus on mathematical optimization and simulation. The review also considers the approach used to generate the solution, the type of railroad (real or hypothetical), and the infrastructure characteristics used to represent the railroad model. Our analysis focuses on giving an overview of these planning models.

  • Research Article
  • Cited by 19
  • 10.1109/tr.2019.2917752
Extend GO Methodology to Support Common-Cause Failures Modeling Explicitly by Means of Bayesian Networks
  • Jun 27, 2019
  • IEEE Transactions on Reliability
  • Tianyuan Ye + 4 more

As a success-oriented system reliability and safety-analysis technique, the GO methodology has been applied in a variety of real-world safety-critical industrial fields. Common-cause failure (CCF) is the simultaneous failure of multiple components within a system due to the same root cause. An enhancement of the original GO methodology is proposed in this paper to support CCF modeling and calculation in both the graphical modeling and algorithm aspects. First, a new concise and formalized GO operator (named CCO) is introduced to represent complicated CCF events, which makes the CCF modeling process intuitive and concise for analysts. On the algorithm side, a mapping rule is given and demonstrated to transform the new CCO operator, together with the multiple operators it impacts, into the corresponding Bayesian network (BN) fragment. Second, a general programmable mapping process is presented to transform any CCF-enhanced GO model into the corresponding BN. Furthermore, using the BN's inference capability, the enhanced GO model with CCF can be calculated efficiently. Moreover, a diagnosis process can be performed to investigate the weak points of the modeled system. Finally, a case study demonstrates the modeling process by means of the CCF-enhanced GO model. The calculation results show that CCF has a significant influence on system reliability. Using diagnosis analysis, the CCF event can be confirmed as the major cause leading to system failure.

  • Book Chapter
  • 10.1007/3-540-47840-x_4
About Design and Efficiency of Distributed Programming: Some Algorithmic Aspects
  • Jan 1, 2002
  • Bernard Toursel

This paper summarizes a talk at the NATO ARW on distributed computing. It deals with some algorithmic aspects of programming with this paradigm, related to the lack of methodologies and tools for designing distributed programs that are efficient, independent of the environment, and able to automatically adapt to the evolution of the program execution and of the platform characteristics.

Keywords: Global Information, Synchronization Phase, Sequential Algorithm, Program Execution, Distributed Program.

  • Research Article
  • Cited by 2
  • 10.4108/eetismla.4094
Aspects-based representative significance of Machine Learning algorithms & natural language processing applications in nanotechnology.
  • Oct 25, 2024
  • EAI Endorsed Transactions on Intelligent Systems and Machine Learning Applications
  • Pascal Muam Mah

Introduction: The rapid changes in the computational power of machine learning algorithms and natural language processing applications have led to multi-scale and many-core designs in nanotechnology. Machine learning algorithms and natural language processing applications are easing the burden engineers have to go through to understand nanoparticles.

Problem: There is still a challenge in predicting and controlling particles of nanomaterials at the nanoscale. Aspect-based climatic conditions are negatively impacting the world with huge modifications of nanoparticles, nanomaterials, and nanostructures.

Objective: The study examines aspects of machine learning algorithms and natural language processing applications that can be used to predict and control the particles and structure of nanomaterials at the nanoscale.

Method and materials: The study examines the significance of machine learning algorithms and applications in nanotechnology, examines aspects of machine learning algorithms and natural language processing applications applied in nanotechnology, and discusses current and future trends of nanotechnology based on learning algorithms and natural language processing applications.

Results and conclusions: The findings lead to the conclusion that machine learning and natural language processing applications in nanotechnology are driving an advanced microscopic revolution with the potential to transform the world's industrialization and scale human existence. Machine learning algorithms have the potential to predict and classify nanomaterials, and natural language processing has the potential to retrieve relevant data hidden within the classified nanomaterials, a result of huge significance for the pharmaceutical industry.

  • Book Chapter
  • Cited by 22
  • 10.1007/978-3-540-69384-0_53
Hardware Implementation Aspects of New Low Complexity Image Coding Algorithm for Wireless Capsule Endoscopy
  • Jan 1, 2008
  • Paweł Turcza + 2 more

The paper presents hardware implementation aspects of a new efficient image compression algorithm designed for wireless capsule endoscopy with a Bayer color filter array (CFA). Since power limitations, small-size conditions, and the specific image data format (CFA) exclude the application of traditional image compression techniques, dedicated ones are necessary. The discussed algorithm is based on an integer version of the discrete cosine transform (DCT); therefore it has low complexity and power consumption. It is demonstrated that the performance of the proposed algorithm is comparable to that of JPEG2000, a very complex, sophisticated wavelet-based coder. In the paper, a VLSI coder architecture is proposed and power requirements are discussed.

  • Conference Article
  • Cited by 8
  • 10.1145/1090122.1090128
Analysis and tuning of subdivision algorithms
  • May 12, 2005
  • Georg Umlauf

This paper surveys the current state of the analysis and tuning of subdivision algorithms. These two aspects of subdivision algorithms are very much intertwined with the differential geometry of the subdivision surface. This paper deals with the interconnection of these different aspects of subdivision algorithms and surfaces. The principal idea for the analysis of a subdivision algorithm dates back to the late 70s, although the overall technique has only been well understood since the early 90s. Most subdivision algorithms are analyzed today, but the proofs involve time-consuming computations. Only recently have simple proofs, based on geometric reasoning, been developed for a certain class of subdivision algorithms. This allows for easier smoothness proofs for newly developed or tuned subdivision algorithms. The analysis of the classical algorithms, such as Catmull-Clark, Loop, etc., shows that the subdivision surfaces at the extraordinary points are not as smooth as the rest of the surface. It was also shown that the subdivision surfaces of these classical algorithms cannot model certain basic shapes. One way to tune a stationary subdivision algorithm to overcome this problem is to drop the stationarity while at the same time reusing the smoothness proofs of the stationary algorithms.
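As a minimal curve analogue of the stationary subdivision schemes discussed (Chaikin's corner-cutting rule, not one of the surface algorithms named above), each refinement step replaces every edge with two new points at 1/4 and 3/4 along it:

```python
def chaikin_step(points, closed=True):
    """One step of Chaikin's corner-cutting subdivision for a 2D
    polygon/polyline: every edge (p, q) is replaced by two points at
    1/4 and 3/4 along it. Repeated steps converge to a smooth (C^1)
    quadratic B-spline curve, the curve counterpart of the stationary
    surface schemes."""
    n = len(points)
    edges = range(n) if closed else range(n - 1)
    refined = []
    for i in edges:
        (x0, y0), (x1, y1) = points[i], points[(i + 1) % n]
        refined.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
        refined.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
    return refined
```

The scheme is stationary in exactly the sense used above: the same fixed averaging rule is applied at every level, which is what makes its smoothness analysis tractable.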

  • Book Chapter
  • 10.5772/5562
An Efficient Quasi-Human Heuristic Algorithm for Solving the Rectangle-Packing Problem
  • Sep 1, 2008
  • Wenqi Huang + 1 more

In this paper, an efficient quasi-human heuristic algorithm (QHA) for solving the rectangle-packing problem is proposed. High area usage of the box can be obtained by this algorithm. Optimal solutions for 19 of 21 test instances taken from Hopper & Turton (2001) and 3 of 13 instances taken from Burke et al. (2004) are achieved by QHA. The experimental results demonstrate that QHA is rather efficient for solving the rectangle-packing problem. We expect the quasi-human approach will also be fruitful for solving other NP-hard problems. The 13th instance, N13, is generated on the basis of the 20th instance in Hopper & Turton (2001).
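The QHA itself is not described in the abstract; as a generic illustration of the rectangle-packing problem it targets, a simple bottom-left heuristic (an illustrative sketch, not the paper's algorithm) places each rectangle at the lowest, then leftmost, feasible candidate position inside a fixed-width box:

```python
def bottom_left_pack(rects, box_width):
    """Bottom-left packing heuristic (illustrative only, not QHA).

    rects     : list of (width, height) rectangles, placed in order
    box_width : width of the open-topped box
    Returns the (x, y) bottom-left corner chosen for each rectangle.
    Candidate positions are the origin plus the top-left and
    bottom-right corners of already-placed rectangles.
    """
    placed = []     # (x, y, w, h) of placed rectangles
    positions = []  # chosen corners, in input order

    def overlaps(x, y, w, h):
        return any(x < px + pw and px < x + w and y < py + ph and py < y + h
                   for px, py, pw, ph in placed)

    for w, h in rects:
        candidates = [(0.0, 0.0)]
        for px, py, pw, ph in placed:
            candidates += [(px + pw, py), (px, py + ph)]
        feasible = [(x, y) for x, y in candidates
                    if x + w <= box_width and not overlaps(x, y, w, h)]
        x, y = min(feasible, key=lambda p: (p[1], p[0]))  # lowest, then leftmost
        placed.append((x, y, w, h))
        positions.append((x, y))
    return positions
```

Quasi-human heuristics refine exactly this kind of placement rule with human-inspired criteria (e.g. preferring "snug" corner positions), which is where the reported high area usage comes from.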

  • Conference Article
  • Cited by 2
  • 10.1109/cdc.1972.269091
Computational aspects of performance-adaptive self-organizing control algorithms
  • Dec 1, 1972
  • Robert Kitahara + 1 more

Several algorithms appear scattered throughout the current literature which treat, in one form or another, problems that may be loosely categorized as belonging to the discipline of performance-adaptive self-organizing control. The need arises, therefore, to determine the usefulness of these algorithms for problems of interest and to ascertain in some way the advantages and shortcomings of the various available schemes. To this end, the work presented here is part of an investigation into the computational aspects of performance-adaptive self-organizing control algorithms. The investigation took the form of a critical evaluation and comparison of the various methods relative to common qualitative and quantitative criteria. From the results of these evaluations and comparisons, conclusions were drawn concerning the relative merit of each scheme. The results of applying several self-organizing schemes to one particular example, a non-linear heat treatment process, are presented here.

  • Conference Article
  • Cited by 8
  • 10.1109/csae.2012.6272555
Towards lightweight distributed applications for mobile cloud computing
  • May 1, 2012
  • Muhammad Shiraz + 2 more

The lightness of a distributed application processing platform becomes more imperative with the rapid proliferation of SMDs and the ever increasing demand to utilize SMDs for intensive applications. A lightweight framework aspires to the minimum possible overhead and the best possible resource utilization on SMDs. Traditionally, the distributed platform is established at runtime by outsourcing intensive applications partially or entirely to cloud datacenters. Such approaches place the intensive responsibilities of distributed platform establishment and management on SMDs for the entire duration of remote application processing, so the computing resources of SMDs are heavily exploited. In this paper we investigate the heavyweight aspects of current offloading algorithms in two different scenarios. First, we analyze the impact of VM deployment for application offloading in a simulation environment using CloudSim. Second, we investigate the heavyweight aspects of current offloading algorithms by qualitative analysis. Finally, we propose a novel model for lightweight distributed application deployment in mobile cloud computing.

  • Research Article
  • Cited by 5
  • 10.1088/1742-6596/459/1/012031
Implementation of perceptual aspects in a face recognition algorithm
  • Sep 6, 2013
  • Journal of Physics: Conference Series
  • F Crenna + 5 more

Automatic face recognition is a biometric technique particularly appreciated in security applications. In fact, face recognition offers the opportunity to operate at a low invasive level without the collaboration of the subjects under test, with face images gathered either from surveillance systems or from specific cameras located at strategic points. The automatic recognition algorithms perform a measurement, on the face images, of a set of specific characteristics of the subject and provide a recognition decision based on the measurement results. Unfortunately, several quantities may influence the measurement of the face geometry, such as its orientation, the lighting conditions, the expression, and so on, affecting the recognition rate. On the other hand, human recognition of faces is a very robust process, far less influenced by the surrounding conditions. For this reason it may be interesting to insert perceptual aspects into an automatic facial-based recognition algorithm to improve its robustness. This paper presents a first study in this direction, investigating the correlation between the results of a perception experiment and the facial geometry, estimated by means of the position of a set of reference points.

  • Single Book
  • Cited by 497
  • 10.1201/9781351073493
Inductive Learning Algorithms for Complex Systems Modeling
  • Aug 8, 2019
  • Hema R Madala + 1 more

Introduction: Systems and Cybernetics. Inductive Learning Algorithms: Self-Organization Method. Network Structures. Long Term Quantitative Predictions. Dialogue Language Generalization. Noise Immunity and Convergence: Analogy with Information Theory. Classification and Analysis of Criteria. Improvement of Noise Immunity. Asymptotic Properties of Criteria. Balance Criterion of Predictions. Convergence of Algorithms. Physical Fields and Modeling: Finite-Difference Pattern Schemes. Comparative Studies. Cyclic Processes. Clusterization and Recognition: Self-Organization Modeling and Clustering. Methods of Self-Organization Clustering. Objective Computer Clustering Algorithm. Levels of Discretization and Balance Criterion. Forecasting Methods of Analogues. Applications: Fields of Application. Weather Modeling. Ecological System Studies. Modeling of Economical Systems. Agricultural System Studies. Modeling of Solar Activity. Inductive and Deductive Networks: Self-Organization Mechanism in the Networks. Network Techniques. Generalization. Comparison and Simulation Results. Basic Algorithms and Program Listings: Computational Aspects of Multilayered Algorithm. Computational Aspects of Combinatorial Algorithm. Computational Aspects of Harmonical Algorithm.
