Minimization of Submesh Boundary Errors in Dynamic Mesh Coding
The video-based dynamic mesh coding (V-DMC) standard is a cutting-edge technology for the compression of dynamic mesh data. V-DMC enables parallel encoding and partial decoding by introducing a submesh framework in which dynamic meshes are partitioned and processed independently. However, V-DMC can introduce submesh boundary errors such as holes when the existence or coordinates of boundary vertices become misaligned across submeshes, degrading the objective and subjective quality of decoded dynamic meshes. To minimize these boundary errors and improve coding performance, we propose a two-stage boundary error correction method operating in V-DMC's preprocessing and encoding/decoding stages. Specifically, the first stage rearranges the preprocessing order to minimize boundary errors, and the second stage fills holes based on boundary information. Experimental results show that the proposed method minimizes boundary errors in V-DMC decoded meshes and thus significantly improves objective and subjective quality compared with the V-DMC reference software.
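The abstract does not spell out how the second stage fills holes; as a rough, hypothetical illustration of the kind of crack-closing that boundary information enables, the sketch below welds nearly coincident boundary vertices of two decoded submeshes. The tolerance, data layout, and welding rule are assumptions, not the authors' algorithm.

```python
import numpy as np

def weld_submesh_boundaries(verts_a, boundary_a, verts_b, boundary_b, tol=1e-4):
    """Snap nearly coincident boundary vertices of two decoded submeshes.

    verts_*    : (N, 3) float arrays of vertex positions
    boundary_* : index lists of boundary vertices (assumed known from
                 the encoded boundary information)
    Vertices closer than `tol` are moved to their midpoint, closing
    crack-style holes caused by coordinate misalignment.
    """
    for i in boundary_a:
        # nearest boundary vertex in the neighbouring submesh
        d = np.linalg.norm(verts_b[boundary_b] - verts_a[i], axis=1)
        j = boundary_b[int(np.argmin(d))]
        if d.min() < tol:
            mid = 0.5 * (verts_a[i] + verts_b[j])
            verts_a[i] = verts_b[j] = mid
    return verts_a, verts_b
```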
- Research Article
41
- 10.1007/s00239-004-0314-2
- Oct 6, 2005
- Journal of Molecular Evolution
The canonical genetic code has been reported both to be error minimizing and to show stereochemical associations between coding triplets and binding sites. In order to test whether these two properties are unexpectedly overlapping, we generated 200,000 randomized genetic codes using each of five randomization schemes, with and without randomization of stop codons. Comparison of the code error (difference in polar requirement for single-nucleotide codon interchanges) with the coding triplet concentrations in RNA binding sites for eight amino acids shows that these properties are independent and uncorrelated. Thus, one is not the result of the other, and error minimization and triplet associations probably arose independently during the history of the genetic code. We explicitly show that prior fixation of a stereochemical core is consistent with an effective later minimization of error.
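The code-error statistic described here is straightforward to reproduce. Below is a minimal sketch using the standard code table, the commonly tabulated Woese polar-requirement values (treat them as illustrative), and one simple randomization scheme that permutes amino acids across synonymous blocks; the study itself uses five schemes and 200,000 codes.

```python
import random

BASES = "UCAG"
CODONS = [a + b + c for a in BASES for b in BASES for c in BASES]
AAS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
STANDARD = dict(zip(CODONS, AAS))

# Woese polar requirement, as commonly tabulated (illustrative values).
PR = {"A": 7.0, "C": 4.8, "D": 13.0, "E": 12.5, "F": 5.0, "G": 7.9,
      "H": 8.4, "I": 4.9, "K": 10.1, "L": 4.9, "M": 5.3, "N": 10.0,
      "P": 6.6, "Q": 8.6, "R": 9.1, "S": 7.5, "T": 6.6, "V": 5.6,
      "W": 5.2, "Y": 5.4}

def code_error(code):
    """Mean squared polar-requirement change over all single-nucleotide
    interchanges between sense codons (stop codons skipped)."""
    total, n = 0.0, 0
    for codon, aa in code.items():
        if aa == "*":
            continue
        for pos in range(3):
            for b in BASES:
                if b == codon[pos]:
                    continue
                aa2 = code[codon[:pos] + b + codon[pos + 1:]]
                if aa2 == "*":
                    continue
                total += (PR[aa] - PR[aa2]) ** 2
                n += 1
    return total / n

def randomized_code():
    """Shuffle amino acids among synonymous blocks, keeping the block
    (degeneracy) structure and the stop codons fixed."""
    aas = sorted(set(AAS) - {"*"})
    perm = dict(zip(aas, random.sample(aas, len(aas))))
    return {c: (a if a == "*" else perm[a]) for c, a in STANDARD.items()}

errs = [code_error(randomized_code()) for _ in range(1000)]
print("standard code error:", code_error(STANDARD))
print("fraction of random codes doing better:",
      sum(e < code_error(STANDARD) for e in errs) / len(errs))
```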
- Research Article
118
- 10.1007/pl00006356
- Jul 1, 1998
- Journal of Molecular Evolution
Distances between amino acids were derived from the polar requirement measure of amino acid polarity and Benner and co-workers' (1994) 74-100 PAM matrix. These distances were used to examine the average effects of amino acid substitutions due to single-base errors in the standard genetic code and equally degenerate randomized variants of the standard code. Second-position transitions conserved all distances on average, an order of magnitude more than did second-position transversions. In contrast, first-position transitions and transversions were about equally conservative. In comparison with randomized codes, second-position transitions in the standard code significantly conserved mean square differences in polar requirement and mean Benner matrix-based distances, but mean absolute value differences in polar requirement were not significantly conserved. The discrepancy suggests that these commonly used distance measures may be insufficient for strict hypothesis testing without more information. The translational consequences of single-base errors were then examined in different codon contexts, and similarities between these contexts explored with a hierarchical cluster analysis. In one cluster of codon contexts corresponding to the RNY and GNR codons, second-position transversions between C and G and transitions between C and U were most conservative of both polar requirement and the matrix-based distance. In another cluster of codon contexts, second-position transitions between A and G were most conservative. Despite the claims of previous authors to the contrary, it is shown theoretically that the standard code may have been shaped by position-invariant forces such as mutation and base content. These forces may have left heterogeneous signatures in the code because of differences in translational fidelity by codon position. A scenario for the origin of the code is presented wherein selection for error minimization could have occurred multiple times in disjoint parts of the code through a phyletic process of competition between lineages. This process permits error minimization without the disruption of previously useful messages, and does not predict that the code is optimally error-minimizing with respect to modern error. Instead, the code may be a record of genetic process and patterns of mutation before the radiation of modern organisms and organelles.
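Building on the sketch above (it reuses BASES, STANDARD, and PR), the same statistic can be split by codon position and by transition versus transversion, which is the comparison this study turns on; the grouping below is illustrative.

```python
# Assumes BASES, STANDARD and PR from the previous sketch.
TRANSITIONS = {("U", "C"), ("C", "U"), ("A", "G"), ("G", "A")}

def mean_sq_by_class(code):
    """Mean squared polar-requirement change, split by codon position
    (1-3) and by transition (ts) vs. transversion (tv)."""
    sums = {}
    for codon, aa in code.items():
        if aa == "*":
            continue
        for pos in range(3):
            for b in BASES:
                if b == codon[pos]:
                    continue
                aa2 = code[codon[:pos] + b + codon[pos + 1:]]
                if aa2 == "*":
                    continue
                kind = "ts" if (codon[pos], b) in TRANSITIONS else "tv"
                s, n = sums.get((pos + 1, kind), (0.0, 0))
                sums[(pos + 1, kind)] = (s + (PR[aa] - PR[aa2]) ** 2, n + 1)
    return {k: s / n for k, (s, n) in sorted(sums.items())}

print(mean_sq_by_class(STANDARD))  # expect position-2 transitions lowest
```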
- Research Article
19
- 10.1049/ip-rsn:19951998
- Jan 1, 1995
- IEE Proceedings - Radar, Sonar and Navigation
The authors present an analysis of multipath effects on global positioning system (GPS) observables using a functional modelling and simulation package for a digital baseband processor in a GPS receiver. Three issues are addressed: first, the static observable errors as functions of multipath parameters are derived mathematically in the absence of input noise; secondly, the dynamic code and carrier tracking errors as functions of time due to multipath in the presence of input noise are investigated; and finally, the deterioration in the accuracy of the GPS observables due to antenna residual phase and antenna centre movement is studied with different receiver design parameters. It is shown that the functional modelling and simulation package provides an alternative approach to GPS system accuracy research where theoretical analysis and hardware methods are very difficult, inaccurate, or prohibitively expensive.
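The static, noise-free case can be reproduced with a very small functional model: a triangular C/A-code autocorrelation, one reflected ray, and a coherent early-late discriminator whose zero crossing gives the tracking bias. The sketch below is a generic stand-in under those assumptions, not the authors' simulation package.

```python
import numpy as np

def tri(tau):
    """Ideal C/A-code autocorrelation (triangle), tau in chips."""
    return np.maximum(0.0, 1.0 - np.abs(tau))

def code_bias(alpha, delta, d=1.0, phase=0.0):
    """Static DLL code-tracking bias (chips) for one multipath ray.

    alpha : multipath-to-direct amplitude ratio
    delta : extra multipath delay in chips
    d     : early-late correlator spacing in chips
    phase : carrier phase of the ray (0 = in-phase, pi = out of phase)
    Coherent early-late discriminator, noise-free static case.
    """
    eps = np.linspace(-0.5, 0.5, 20001)
    corr = lambda t: tri(t) + alpha * np.cos(phase) * tri(t - delta)
    disc = corr(eps + d / 2) - corr(eps - d / 2)
    return eps[np.argmin(np.abs(disc))]  # zero crossing = tracking point

# in-phase / out-of-phase error envelope vs. multipath delay
for delta in (0.1, 0.25, 0.5, 1.0):
    print(delta, code_bias(0.5, delta), code_bias(0.5, delta, phase=np.pi))
```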
- Conference Article
38
- 10.1109/euvip53989.2022.9922888
- Sep 11, 2022
This article presents a new compression scheme for 3D dynamic meshes, referred to as Video and Subdivision based Mesh Coding (VSMC). The VSMC approach combines a displaced subdivision surface model with video-based coding in order to achieve efficient compression performance and real-time, low-power decoding and playback. In addition, VSMC supports a rich set of functionalities including scalability (spatial, temporal, and quality) and progressive transmission. The proposed scheme [1] was shown to outperform the anchor for the MPEG Call for Proposals on Dynamic Mesh coding [2] and was recently selected by the ISO MPEG 3D Graphics Coding group as the basis for the upcoming Video-based Dynamic Mesh Coding standard.
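The core of a displaced-subdivision-surface representation is a coarse base mesh that is subdivided and then displaced at the decoder. Below is a minimal sketch of those two steps, assuming midpoint subdivision and per-vertex displacement along externally supplied normals; VSMC's actual subdivision scheme and displacement coding are richer than this.

```python
import numpy as np

def subdivide(verts, faces):
    """One midpoint-subdivision step: each triangle becomes four."""
    verts = list(map(np.asarray, verts))
    midpoint = {}
    def mid(i, j):
        key = (min(i, j), max(i, j))
        if key not in midpoint:          # create each edge midpoint once
            midpoint[key] = len(verts)
            verts.append(0.5 * (verts[i] + verts[j]))
        return midpoint[key]
    new_faces = []
    for a, b, c in faces:
        ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
        new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return np.array(verts), new_faces

def displace(verts, normals, scalars):
    """Move each subdivided vertex along its normal by a decoded scalar
    (in V-DMC/VSMC the scalars come from a displacement video)."""
    return verts + scalars[:, None] * normals
```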
- Research Article
15
- 10.1109/tit.1970.1054420
- Mar 1, 1970
- IEEE Transactions on Information Theory
A recent paper by Crimmins et al. deals with minimization of mean-square error for group codes by the use of Fourier transforms on groups. In this correspondence a method for representing the groups in a form suitable for machine calculation is shown. An efficient method for calculating the Fourier transform of a group is also proposed and its relationship to the fast Fourier transform is shown. For groups of characteristic two, the calculation requires only $N \log_2 N$ additive operations, where $N$ is the order of the group.
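For groups of characteristic two the transform in question is the Walsh-Hadamard transform, and the $N \log_2 N$ additive-operation count is achieved by the familiar in-place butterfly recursion; a short sketch:

```python
def fwht(x):
    """Fast Walsh-Hadamard transform of a length-2^k sequence.

    Uses only N log2 N additions/subtractions, matching the bound
    stated for groups of characteristic two."""
    x = list(x)
    n, h = len(x), 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                # butterfly: sum and difference of paired entries
                x[j], x[j + h] = x[j] + x[j + h], x[j] - x[j + h]
        h *= 2
    return x

print(fwht([1, 0, 1, 0, 0, 1, 1, 0]))
```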
- Conference Article
2
- 10.1145/2801694.2802141
- Jan 1, 2015
The security of hardware-software systems is at risk from a wide range of attack vectors that appear at various stages during the execution of machine code. Existing approaches for repairing software defects are restricted in their applicability and functionality and in the range of vulnerabilities they can identify and eliminate. We propose an approach for removing software errors in program code that is based on just-in-time compilation in a virtual execution environment. The virtual environment uses static, dynamic, and hybrid analyses of the intermediate representation of vulnerable code and recompiles such code to be safe. A language of code annotations allows us to manage the static and dynamic analyses and code transformations, changing the level of analysis and the time spent on it by dynamically adapting the precision.
- Conference Article
10
- 10.1109/euvip53989.2022.9922839
- Sep 11, 2022
ISO/IEC JTC1 SC29, also called MPEG, has been working on a compression standard for dynamic meshes for several years and released a Call for Proposals (CfP) for Dynamic Mesh Coding in October 2021. One of the goals of the future standard is to utilize the Visual Volumetric Video-based Coding (V3C) framework, defined in ISO/IEC 23090-5, which is already used for dynamic point cloud compression and volumetric video. In this paper, the authors describe their vision of how dynamic mesh compression could be achieved, corresponding to their technical response to the CfP. The presented objective and subjective results show that the proposed solution outperforms the anchor in terms of objective metrics and subjectively perceived visual quality for low bit rate use cases.
- Discussion
10
- 10.1007/s00239-018-9880-6
- Jan 1, 2019
- Journal of Molecular Evolution
In a recent Letter, Di Giulio questions the use of the term 'neutral' when describing the process by which error minimization may have arisen as a side-product of genetic code expansion, resulting from the addition of similar amino acids to similar codons (Di Giulio, J Mol Evol 86(9):593-597, 2018). However, I point out that in this scenario error minimization is non-adaptive, and so 'neutral' is an appropriate term to describe its imperviousness to direct selection. Error minimization is a form of mutational robustness, and so is commonly viewed as beneficial. This in turn implies that not all beneficial traits may be adaptations generated by direct selection for that trait.
- Research Article
- 10.71097/ijsat.v16.i2.6742
- Jun 15, 2025
- International Journal on Science and Technology
The food and beverage industry has rapidly shifted towards digitization, driven by changing customer expectations, mobile technology, and the impact of the COVID-19 pandemic. Addressing the need for seamless, contactless, and efficient ordering, this project presents Quick Cart, a full-stack web application that transforms the traditional ordering process using QR code technology, real-time menu management, and secure digital payments. Quick Cart bridges the gap between customers and store owners, offering an interactive platform for managing the entire order lifecycle—from menu browsing to payment. Customers scan a QR code placed on their table to access the digital menu, customize orders, and complete transactions without staff involvement. Store owners can register businesses, manage menus, and track orders through a user-friendly dashboard. Each business is provided a unique QR code linking to its digital menu. Built with Node.js and Express.js on the backend, MongoDB for database management, and React.js on the frontend, the system ensures a scalable, responsive, and dynamic user experience. Axios is used for real-time API communication. Payment integration is achieved through UPI and Razorpay, enabling secure and efficient digital transactions, with real-time payment status updates for store owners. Designed for flexibility and scalability, Quick Cart allows independent management of multiple businesses without data conflicts. Secure authentication ensures data protection. The system supports mobile responsiveness, dynamic QR code generation, error handling, and robust API communication, delivering reliability even on low-end devices.
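Most of this is application narrative, but the dynamic QR-code generation is concrete enough to sketch. The snippet below uses the Python `qrcode` package; the URL scheme and file naming are hypothetical, since the abstract does not publish Quick Cart's actual routes.

```python
# pip install qrcode[pil]
import qrcode

def business_qr(business_id: str, base_url: str = "https://example.com/menu"):
    """Generate the per-business QR code linking to its digital menu.

    The URL pattern below is an assumption for illustration only."""
    img = qrcode.make(f"{base_url}/{business_id}")
    img.save(f"qr_{business_id}.png")

business_qr("cafe-42")
```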
- Conference Article
- 10.1117/12.630378
- Nov 9, 2005
- Proceedings of SPIE, the International Society for Optical Engineering/Proceedings of SPIE
This paper proposes a novel multiple-image coding technique using Ray-Space interpolation. Ray-Space, an image-based rendering technique for generating arbitrary views from multiple cameras, describes three-dimensional space using only the ray information gathered from a large number of cameras, so data compression is needed. We leverage temporal and spatial correlation, aiming for high compression. H.264/AVC is employed for dynamic image coding, and studies have been conducted on using AVC in the time domain. Here we propose a novel algorithm that uses view interpolation for coding in the space domain. Interpolation generates the middle view of a stereoscopic setup; by using interpolated images generated from coded images as references, coding performance should improve, so interpolation accuracy is important for coding performance. In this paper, we propose an interpolation technique using geometric information in a linear camera arrangement: by calculating the trace of each point under the camera arrangement and obtaining its corresponding point, the middle image is generated. The interpolation method is thus an intensity-based scheme, constrained by smoothness in the disparity domain. Coding experiments using interpolation outperform standard AVC by 1-2 dB at all bitrates. Moreover, we handle occlusion regions by extrapolation using four images. To detect occlusion regions, we use two criteria: the minimum error and the ratio of minimum errors among the four images. In occlusion regions, the intensity of the middle image is generated from the extrapolated images. This method gives up to 1-3 dB improvement over the occlusion-ignoring algorithm.
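As a simplified stand-in for the intensity-based, disparity-constrained interpolation described here, the sketch below synthesizes a middle view of a linear camera pair by SSD block matching and half-disparity placement; the block size, search range, and absence of a smoothness constraint are simplifications, not the paper's method.

```python
import numpy as np

def middle_view(left, right, max_d=32, block=8):
    """Synthesize the middle view of a linear stereo pair by intensity
    block matching (a simplified stand-in for Ray-Space interpolation).

    left, right : equally sized 2-D grayscale arrays; a scene point at
    column x in `left` appears at column x - d in `right` (d >= 0).
    The best disparity per block is found by SSD, and the averaged
    block is written into the middle image at half that disparity.
    """
    h, w = left.shape
    mid = np.zeros((h, w))
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            ref = left[y:y + block, x:x + block].astype(float)
            best_ssd, best_d = np.inf, 0
            for d in range(0, min(max_d, x) + 1):
                cand = right[y:y + block, x - d:x - d + block].astype(float)
                ssd = float(np.sum((ref - cand) ** 2))
                if ssd < best_ssd:
                    best_ssd, best_d = ssd, d
            match = right[y:y + block, x - best_d:x - best_d + block]
            tx = x - best_d // 2              # half-disparity placement
            mid[y:y + block, tx:tx + block] = 0.5 * (ref + match.astype(float))
    return mid
```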
- Conference Article
17
- 10.1145/1453101.1453135
- Nov 9, 2008
Programs trusted with secure information should not release that information in ways contrary to system policy. However, when a program contains an illegal flow of information, current information-flow reporting techniques are inadequate for determining the cause of the error. Reasoning about information-flow errors can be difficult, as the flows involved can be quite subtle. We present a general model for information-flow blame that can explain the source of such security errors in code. This model is implemented by changing the information-flow verification procedure to: (1) generate supplementary information to reveal otherwise hidden program dependencies; (2) modify the constraint solver to construct a blame dependency graph; and (3) develop an explanation procedure that returns a complete and minimal error report. Our experiments show that information-flow errors can generally be explained and resolved by viewing only a small fraction of the total code.
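The blame model can be pictured as a search over a dependency graph: edges are the (possibly hidden) flows the verifier surfaces, and a minimal report is a shortest path from a secret source to a public sink. The sketch below illustrates only that idea; the paper's constraint-solver integration is not reproduced.

```python
from collections import deque

def blame_path(flows, source, sink):
    """Minimal blame explanation: shortest chain of dependencies that
    carries `source` (secret) to `sink` (public).

    flows : list of (frm, to, program_location) dependency edges,
            including implicit flows surfaced by the verifier
    Returns the locations along one shortest offending path."""
    graph = {}
    for frm, to, loc in flows:
        graph.setdefault(frm, []).append((to, loc))
    queue, seen = deque([(source, [])]), {source}
    while queue:
        node, path = queue.popleft()
        if node == sink:
            return path
        for nxt, loc in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [loc]))
    return None  # no illegal flow found

flows = [("password", "tmp", "line 3"), ("tmp", "log_msg", "line 7"),
         ("log_msg", "stdout", "line 9")]
print(blame_path(flows, "password", "stdout"))  # -> the 3-step flow
```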
- Research Article
5
- 10.1142/s0219691308002690
- Nov 1, 2008
- International Journal of Wavelets, Multiresolution and Information Processing
Recently, several wavelet-based algorithms have been proposed for feature extraction in non-stationary signals such as ECG. These methods, however, have mainly used general purpose (unmatched) wavelet bases such as Daubechies and Quadratic Spline. In this paper, five new matched wavelet bases, with minimum approximation error and maximum coding gain criteria, are designed and applied to ECG signal analysis. To study the effect of using different wavelet bases for this application, two different wavelet-based R peak detection algorithms are implemented: (1) a conventional wavelet-based method; and (2) a modified wavelet-based R peak detection algorithm. Both algorithms are evaluated using the MIT-BIH Arrhythmia database. Experimental results show lower computational complexity (up to 76%) of the proposed R peak detection method compared to the conventional method. They also show considerable decrease in the number of failed detections (up to 55%) for both the conventional and the proposed algorithms when using matched wavelets instead of Quadratic Spline wavelet which, according to the literature, has generated the best detection results among all conventional wavelet bases studied previously for ECG signal analysis.
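A conventional (unmatched-basis) wavelet R-peak detector of the kind used as the baseline can be sketched with PyWavelets. The retained detail levels, threshold, and refractory period below are illustrative choices for MIT-BIH-style records at roughly 360 Hz; a matched wavelet would simply replace the `wavelet` argument.

```python
# pip install PyWavelets
import numpy as np
import pywt

def detect_r_peaks(ecg, fs, wavelet="db4", levels=5):
    """Generic wavelet-based R-peak detector (unmatched basis).

    Keeps the detail scales where QRS energy concentrates, squares the
    reconstruction, and thresholds local maxima with a refractory gap."""
    coeffs = pywt.wavedec(ecg, wavelet, level=levels)
    kept = [np.zeros_like(c) for c in coeffs]
    kept[2], kept[3] = coeffs[2], coeffs[3]      # mid-band (QRS) details
    qrs = pywt.waverec(kept, wavelet)[: len(ecg)] ** 2
    thr = 0.3 * qrs.max()                        # illustrative threshold
    refractory = int(0.2 * fs)                   # 200 ms between beats
    peaks, last = [], -refractory
    for i in range(1, len(qrs) - 1):
        if qrs[i] > thr and qrs[i] >= qrs[i - 1] and qrs[i] >= qrs[i + 1]:
            if i - last >= refractory:
                peaks.append(i)
                last = i
    return peaks
```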
- Research Article
8
- 10.1088/1361-6501/ac4432
- Jan 20, 2022
- Measurement Science and Technology
The multipath effect causes severe degradation in the positioning of commercial GPS receivers: multipath error can push the positioning error to a few tens of metres. Cumulative multipath delays of less than 0.1-0.35 chips are particularly difficult to mitigate in GPS receivers; they severely degrade GPS signals and can bias the measurements. To alleviate this problem, this work proposes estimating the multipath parameters with an annihilating filter and mitigating them in the GPS tracking loop. Randomly generated multipath signals can be estimated in the receiver at a sampling rate lower than the large bandwidth of the GPS baseband signal would suggest. Here, the frequency components of the multipath signal, modelled as superimposed complex exponentials, are obtained from the time delays and amplitudes of the path observables. A Rayleigh fading model of the urban scenario is simulated, in which the amplitude and phase of each path (i.e. each frequency component of the superimposed complex exponentials) are set and the fading signal is convolved with the GPS signal to form the multipath-faded signal. In the GPS receiver's post-processing stage, the multipath components are estimated with the annihilating filter, and an inverse/adaptive filter and a compensation technique are then applied to mitigate them. The mean square error for different numbers of paths in noisy environments is analyzed using the Cadzow denoising algorithm. Simulation results for the proposed technique, employed in the tracking module of a software GPS receiver under severe multipath conditions, indicate a substantial enhancement in receiver performance, with minimal code and carrier phase error compared with the least squares and adaptive blind equalization channel techniques. Moreover, positioning accuracy was also calculated with multipath components included on two of the six satellites used in the simulation; the results showed that the annihilating filter improved the mean position accuracy up to 9.3023 m.
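The annihilating-filter step, recovering the frequencies of superimposed complex exponentials whose parameters encode the path delays and amplitudes, is classical finite-rate-of-innovation machinery and can be sketched directly; Cadzow denoising, amplitude recovery, and the tracking-loop integration are omitted here.

```python
import numpy as np

def annihilating_filter_freqs(x, K):
    """Estimate K complex-exponential frequencies from samples x[n]
    via the annihilating filter.

    Builds the convolution (Toeplitz) matrix of x, takes the filter
    from its nullspace (smallest singular vector), and reads the
    frequencies off the roots of the filter polynomial."""
    N = len(x)
    # row for n: [x[n], x[n-1], ..., x[n-K]], for n = K .. N-1
    A = np.array([x[n::-1][:K + 1] for n in range(K, N)])
    _, _, Vh = np.linalg.svd(A)
    h = Vh[-1].conj()                 # annihilating filter coefficients
    return np.angle(np.roots(h))      # frequencies in rad/sample

# two exponentials -> the path parameters map to these frequencies
n = np.arange(64)
x = 1.0 * np.exp(1j * 0.9 * n) + 0.4 * np.exp(1j * 1.7 * n)
print(np.sort(annihilating_filter_freqs(x, 2)))   # ~ [0.9, 1.7]
```

Once the frequencies are known, the amplitudes follow from a linear least-squares fit, which is where an inverse or adaptive compensation filter could take over.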
- Research Article
3
- 10.1007/s11042-020-09098-9
- Jun 9, 2020
- Multimedia Tools and Applications
A high-capacity partial reversible data hiding (PRDH) scheme is introduced in this paper. First, an original image is converted to a cover image by the proposed image transformation algorithm, which adopts the (7,4) Hamming code and minimal pairwise square error to ensure that the generated cover image is an almost distortion-free version of the original. The secret bits are embedded into the cover image by flipping and modifying cover bits according to the syndrome generated by the Hamming code. When the secret bits are extracted from the stego image, it can be transformed back to a cover image by the error-correcting ability of the Hamming code; this is the so-called partial reversibility property. The visual performance and embedding capacity of the proposed method are theoretically analyzed. According to the experimental and theoretical results, the proposed method achieves high embedding capacity with acceptable visual performance. More specifically, the embedding rate is 10.5 times that of Jana et al.'s method and Yang et al.'s proposed PRDH, and 3.5 times that of Yang et al.'s modified PRDH.
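The syndrome-coding step of Hamming-based embedding is compact enough to show in full. A minimal sketch of embedding 3 secret bits into 7 cover bits with at most one bit flip follows; the surrounding image transformation and full PRDH pipeline are not reproduced.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column i is the binary
# representation of i+1, so a nonzero syndrome directly names the bit to flip.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def embed(cover7, secret3):
    """Embed 3 secret bits into 7 cover bits, flipping at most one bit
    (the syndrome-coding step of Hamming-based embedding)."""
    x = np.array(cover7) % 2
    s = (H @ x + np.array(secret3)) % 2        # wanted-vs-actual syndrome
    if s.any():
        idx = int("".join(map(str, s)), 2) - 1  # column of H matching s
        x[idx] ^= 1
    return x

def extract(stego7):
    """Recover the 3 secret bits as the stego block's syndrome."""
    return (H @ (np.array(stego7) % 2)) % 2

stego = embed([1, 0, 1, 1, 0, 0, 1], [1, 0, 1])
print(extract(stego))                           # -> [1 0 1]
```

The same parity-check matrix later corrects the single flipped bit, which is what lets the stego image be transformed back toward the cover image.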