Fast, Minimum Storage Ray-Triangle Intersection

  • Abstract
  • Similar Papers
Abstract
We present a clean algorithm for determining whether a ray intersects a triangle. The algorithm translates the origin of the ray and then changes the base to yield a vector (t, u, v)^T, where t is the distance to the plane in which the triangle lies and (u, v) represents the coordinates inside the triangle. One advantage of this method is that the plane equation need not be computed on the fly nor be stored, which can amount to significant memory savings for triangle meshes. As we found our method to be comparable in speed to previous methods, we believe it is the fastest ray-triangle intersection routine for triangles that do not have precomputed plane equations.
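The algorithm the abstract describes can be sketched in a few lines. The following is a minimal, illustrative Python version (not the authors' reference code): it forms the two edge vectors, translates the ray origin to the first vertex, and solves for (t, u, v) via scalar triple products, rejecting as soon as a barycentric coordinate falls outside the triangle. The helper names (`cross`, `dot`, `sub`) and the `eps` parallel-ray tolerance are assumptions for this sketch.

```python
def cross(a, b):
    """3-D cross product of two vectors given as 3-tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    """3-D dot product."""
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def sub(a, b):
    """Component-wise difference a - b."""
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def ray_triangle_intersect(orig, dirn, v0, v1, v2, eps=1e-9):
    """Return (t, u, v) if the ray orig + t*dirn hits triangle (v0, v1, v2),
    else None. No plane equation is stored: everything is derived from the
    vertices on the fly."""
    e1 = sub(v1, v0)
    e2 = sub(v2, v0)
    p = cross(dirn, e2)
    det = dot(e1, p)
    if abs(det) < eps:          # ray parallel to the triangle's plane
        return None
    inv_det = 1.0 / det
    s = sub(orig, v0)           # translate the ray origin
    u = dot(s, p) * inv_det     # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = dot(dirn, q) * inv_det  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv_det    # distance along the ray
    return (t, u, v)
```

For example, a ray from (0.25, 0.25, -1) along +z hits the unit right triangle in the z = 0 plane at t = 1 with (u, v) = (0.25, 0.25), while a ray offset outside the triangle is rejected early.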

Similar Papers
  • Conference Article
  • Citations: 177
  • 10.1145/1198555.1198746
Fast, minimum storage ray/triangle intersection
  • Jan 1, 2005
  • Tomas Möller + 1 more

We present a clean algorithm for determining whether a ray intersects a triangle. The algorithm translates the origin of the ray and then changes the base of that vector which yields a vector (t, u, v)^T, where t is the distance to the plane in which the triangle lies and (u, v) represents the coordinates inside the triangle. One advantage of this method is that the plane equation need not be computed on the fly nor be stored, which can amount to significant memory savings for triangle meshes. As we found our method to be comparable in speed to previous methods, we believe it is the fastest ray/triangle intersection routine for triangles which do not have precomputed plane equations.

  • Research Article
  • Citations: 4
  • 10.1049/rsn2.12356
Three‐dimensional point cloud reconstruction of inverse synthetic aperture radar image sequences based on back projection and iterative closest point fusion
  • Dec 8, 2022
  • IET Radar, Sonar & Navigation
  • Yu Wang + 4 more

In order to recover the three‐dimensional (3D) structure of the target from sequential inverse synthetic aperture radar (ISAR) images, the factorisation method is generally used. It requires a large number of high‐quality matched feature points from different ISAR images. If the number of extracted feature points is insufficient, the restored 3D structure is not obvious. Furthermore, the mismatching of feature points will greatly affect the quality of target reconstruction. However, the factorisation method only uses the information from the ISAR images, while that from imaging geometry is not sufficiently considered. ISAR imaging is a kind of projection, and the projection plane information could be taken into account for the 3D reconstruction. Hence, a new 3D reconstruction method for stable targets from sequential ISAR images is proposed in this paper. Firstly, the ISAR images are preprocessed with the CLEAN algorithm. The maximum between‐cluster variance method is applied to extract feature points from the processed images. Moreover, the image projection plane and projection equation corresponding to different ISAR images are analysed by using imaging geometry information. According to the projection equation, the feature points are back‐projected (BP) to the 3D space. Finally, the 3D point clouds obtained by the BP from multiple radar images are fused by the iterative closest point algorithm to restore the 3D structure of the target. The simulation and experiment results show the effectiveness and robustness of this method.

  • Conference Article
  • 10.1109/ncc.2018.8599975
Optimal rate control in a quasi-static wireless fading channel with throughput and power constraints
  • Feb 1, 2018
  • Rahul R + 1 more

We propose a novel recursive algorithm for determining the optimal admission and transmission rates for an M/M/1 transmitter buffer for obtaining minimum average queue length, under quasi-static fading, while satisfying a throughput constraint with given available transmit power. The optimal rate setting policy is obtained with significant savings in memory and computational complexity, and is simple and easy to implement.

  • Research Article
  • Citations: 4
  • 10.1109/tap.2021.3090504
Adaptive Multilevel Nonuniform Grid Algorithm for the Accelerated Analysis of Composite Metallic–Dielectric Radomes
  • Dec 1, 2021
  • IEEE Transactions on Antennas and Propagation
  • Yair Hollander + 1 more

An adaptive multilevel nonuniform grid (MLNG) algorithm is developed for the accelerated computation of fields radiated through composite metallic–dielectric radomes as well as antenna-radome interactions. The MLNG approach is applied to the mixed potential formulation of the coupled surface and volume electric field integral equations. The radome is decomposed into a hierarchy of subdomains (SDs) by an adaptive algorithm that closely follows the radome geometry, allowing significant savings in memory and CPU time. In the MLNG algorithm, only local generalized impedance matrices of the finest level SDs are evaluated. Far-zone potentials and fields are indirectly evaluated through a multilevel aggregation involving phase- and amplitude-compensated interpolation on nonuniform grids (NGs), requiring considerably fewer calculations as compared with the classical method of moments (MoM). The MLNG algorithm is incorporated in a preconditioned iterative solver. Accuracy as well as memory consumption and computation times of the algorithm are studied on realistic examples. The radome effect on the antenna input impedance and electric current density distribution is demonstrated. The method is validated by comparison with a commercial MoM software and shown to exhibit a computational complexity (CC) of O(NlogN), N being the number of unknowns.

  • Research Article
  • Citations: 2
  • 10.1016/j.imavis.2006.07.003
Memory-efficient spatial prediction image compression scheme
  • Aug 30, 2006
  • Image and Vision Computing
  • Anil V Nandi + 2 more


  • Conference Article
  • Citations: 1
  • 10.1117/12.584430
A multiresolution halftoning algorithm for progressive display
  • Jan 17, 2005
  • Mithun Mukherjee + 1 more

We describe and implement an algorithmic framework for memory efficient, ‘on-the-fly’ halftoning in a progressive transmission environment. Instead of a conventional approach which repeatedly recalls the continuous tone image from memory and subsequently halftones it for display, the proposed method achieves significant memory efficiency by storing only the halftoned image and updating it in response to additional information received through progressive transmission. Thus the method requires only a single frame-buffer of bits for storage of the displayed binary image and no additional storage is required for the contone data. The additional image data received through progressive transmission is accommodated through in-place updates of the buffer. The method is thus particularly advantageous for high resolution bi-level displays where it can result in significant savings in memory. The proposed framework is implemented using a suitable multi-resolution, multi-level modification of error diffusion that is motivated by the presence of a single binary frame-buffer. Aggregates of individual display bits constitute the multiple output levels at a given resolution. This creates a natural progression of increasing resolution with decreasing bit-depth.

  • Research Article
  • Citations: 9
  • 10.1109/tmag.2009.2012607
Circulant Adaptive Integral Method (CAIM) for Electromagnetic Scattering From Large Targets of Arbitrary Shape
  • Mar 1, 2009
  • IEEE Transactions on Magnetics
  • Constantina C Ioannidi + 1 more

A novel version of the 2-D adaptive integral method (AIM), called circulant AIM, is presented. The method is suited particularly to cylindrical structures of a quasicircular cross section, such as the wall of a jet engine inlet. Unlike in standard AIM, the auxiliary grid, where the scatterer is embedded, is no longer Cartesian, but polar/cylindrical, resembling a spider's web. In this way, a much lower number of auxiliary unknowns are required, since only delta sources sufficiently close to the outer surface are utilized. Apart from significant savings in memory, the main advantage of this geometry is that the resulting Green's function matrix is not merely Toeplitz, but also circulant, leading to enhanced efficiency of the technique.

  • Research Article
  • Citations: 39
  • 10.3390/electronics9091432
Neural Network for Low-Memory IoT Devices and MNIST Image Recognition Using Kernels Based on Logistic Map
  • Sep 2, 2020
  • Electronics
  • Andrei Velichko

This study presents a neural network which uses filters based on logistic mapping (LogNNet). LogNNet has a feedforward network structure, but possesses the properties of reservoir neural networks. The input weight matrix, set by a recurrent logistic mapping, forms the kernels that transform the input space to the higher-dimensional feature space. The most effective recognition of a handwritten digit from MNIST-10 occurs under chaotic behavior of the logistic map. The correlation of classification accuracy with the value of the Lyapunov exponent was obtained. An advantage of LogNNet implementation on IoT devices is the significant savings in memory used. At the same time, LogNNet has a simple algorithm and performance indicators comparable to those of the best resource-efficient algorithms available at the moment. The presented network architecture uses an array of weights with a total memory size from 1 to 29 kB and achieves a classification accuracy of 80.3–96.3%. Memory is saved due to the processor, which sequentially calculates the required weight coefficients during the network operation using the analytical equation of the logistic mapping. The proposed neural network can be used in implementations of artificial intelligence based on constrained devices with limited memory, which are integral blocks for creating ambient intelligence in modern IoT environments. From a research perspective, LogNNet can contribute to the understanding of the fundamental issues of the influence of chaos on the behavior of reservoir-type neural networks.
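The memory saving the LogNNet abstract describes comes from regenerating the input-weight matrix from the logistic map rather than storing it. A minimal sketch of that idea follows; the parameter values `r = 3.9` and `x0` here are illustrative assumptions, not the paper's exact settings, and the rescaling of map values to (-1, 1) is likewise a hypothetical choice for this example.

```python
def logistic_weights(rows, cols, r=3.9, x0=0.25):
    """Generate a rows x cols weight matrix from the logistic map
    x_{n+1} = r * x_n * (1 - x_n). Because the sequence is fully
    determined by (r, x0), the matrix never needs to be stored:
    it can be recomputed element by element during inference."""
    x = x0
    W = []
    for _ in range(rows):
        row = []
        for _ in range(cols):
            x = r * x * (1.0 - x)       # one step of the logistic map
            row.append(2.0 * x - 1.0)   # rescale (0, 1) -> (-1, 1)
        W.append(row)
    return W
```

Because the generation is deterministic, two calls with the same (r, x0) yield identical matrices, which is exactly what lets a constrained device trade a kilobyte-scale weight array for two scalars plus recomputation.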

  • Research Article
  • Citations: 20
  • 10.1108/09615530810853655
An evolutionary‐based inverse approach for the identification of non‐linear heat generation rates in living tissues using a localized meshless method
  • May 22, 2008
  • International Journal of Numerical Methods for Heat & Fluid Flow
  • Kevin Erhart + 3 more

Purpose: This paper aims to develop and describe an improved process for determining the rate of heat generation in living tissue. Design/methodology/approach: Previous work by the authors on solving the bioheat equation has been updated to include a new localized meshless method which will create a more robust and computationally efficient technique. Inclusion of this technique will allow for the solution of more complex and realistic geometries, which are typical of living tissue. Additionally, the unknown heat generation rates are found through genetic algorithm optimization. Findings: The localized technique showed superior accuracy and significant savings in memory and processor time. The computational efficiency of the newly proposed meshless solver allows the optimization process to be carried to a higher level, leading to more accurate solutions for the inverse technique. Several example cases are presented to demonstrate these conclusions. Research limitations/implications: This work includes only 2D development of the approach, while any realistic modeling for patient‐specific cases would be inherently 3D. The extension to 3D, as well as studies to improve the technique by decreasing the sensitivity to measurement noise and to incorporate non‐invasive measurement positioning, are under way. Practical implications: As medical imaging continuously improves, such techniques may prove useful in patient diagnosis, as heat generation can be correlated to the presence of tumors, infections, or other conditions. Originality/value: This paper describes a new application of meshless methods. Such methods are becoming attractive due to their decreased pre‐processing requirements, especially for problems involving complex geometries (such as patient specific tissues), as well as optimization problems, where geometries may be constantly changing.

  • Book Chapter
  • Citations: 4
  • 10.1007/978-3-319-63754-9_31
Image Reconstruction Using Novel Two-Dimensional Fourier Transform
  • Oct 15, 2017
  • S Kala + 3 more

Reconstruction of a signal from its subset is used in various contexts in the field of signal processing. Image reconstruction is one such example which finds widespread application in face recognition, medical imaging, computer vision etc. Image reconstruction is computationally complex, and efficient implementations need to exploit the parallelism inherent in this operation. Discrete Fourier Transform (DFT) is a widely used technique for image reconstruction. Fast Fourier Transform (FFT) algorithms are used to compute DFTs efficiently. In this paper we propose a novel two dimensional (2D) Fast Fourier Transform technique for efficient reconstruction of a 2D image. The algorithm first applies 1D FFT based on radix-4^n along the rows of the image followed by the same FFT operation along columns, to obtain a 2D FFT. The radix-4^n technique used here provides significant savings in memory required in the intermediate stages and considerable improvement in latency. The proposed FFT algorithm can be easily extended to three dimensional and higher dimensional FFTs. Simulated results for image reconstruction based on this technique are presented in the paper. A 64-point FFT based on radix-4^3 has been implemented using 130nm CMOS technology and operates at a maximum clock frequency of 350 MHz.
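The row-column decomposition this abstract relies on (1-D transforms along rows, then along columns) can be illustrated briefly. The sketch below substitutes a naive O(N^2) DFT for the paper's radix-4^n FFT, purely to show the factorization of a 2-D transform into two passes of 1-D transforms; it is not the authors' implementation.

```python
import cmath

def dft(x):
    """Naive 1-D DFT (the paper would use a radix-4^n FFT here instead)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

def dft2d(img):
    """2-D DFT via row-column decomposition: transform every row,
    then transform every column of the intermediate result."""
    rows = [dft(list(r)) for r in img]            # pass 1: rows
    cols = [dft(list(c)) for c in zip(*rows)]     # pass 2: columns
    return [list(r) for r in zip(*cols)]          # transpose back
```

For a constant 2x2 image of ones, all the signal lands in the DC bin, X[0][0] = 4, and every other bin is zero, which is a quick sanity check of the decomposition.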

  • Research Article
  • Citations: 41
  • 10.1109/78.539028
Satisficing search algorithms for selecting near-best bases in adaptive tree-structured wavelet transforms
  • Jan 1, 1996
  • IEEE Transactions on Signal Processing
  • C Taswell

Satisficing search algorithms are proposed for adaptively selecting near-best basis and near-best frame decompositions in redundant tree-structured wavelet transforms. Any of a variety of additive or nonadditive information cost functions can be used as the decision criterion for comparing and selecting nodes when searching through the tree. The algorithms are applicable to tree-structured transforms generated by any kind of wavelet whether orthogonal, biorthogonal, or nonorthogonal. These satisficing search algorithms implement suboptimizing rather than optimizing principles, and acquire the important advantage of reduced computational complexity with significant savings in memory, flops, and time. Despite the suboptimal approach, top-down tree-search algorithms with additive or nonadditive costs that yield near-best bases can be considered, in certain important and practical situations, better than bottom-up tree-search algorithms with additive costs that yield best bases. Here, "better than" means that, effectively, the same level of performance can be attained for a relative fraction of the computational work. Experimental results comparing the various information cost functions and basis selection methods are demonstrated for both data compression of real speech and time-frequency analysis of artificial transients.

  • Research Article
  • Citations: 1
  • 10.1007/s11432-006-2015-5
Hybrid algorithm for accelerating the double series of Floquet vector modes
  • Oct 1, 2006
  • Science in China Series F: Information Sciences
  • Weidong Li + 3 more

In this paper, a hybrid algorithm for accelerating the double series of Floquet vector modes arising in the analysis of frequency selective surfaces (FSS) is presented. The asymptotic terms with slow convergence in the double series are first accelerated by Poisson transformation and the Ewald method, and then the remaining series is accelerated by Shank transformation. This results in significant savings in memory and computing time. Numerical examples verify the validity of the hybrid acceleration algorithm.

  • Research Article
  • Citations: 4
  • 10.2514/1.44251
Fast Large-Eddy Simulation of Low Reynolds Number Flows over a NACA0025
  • Jan 1, 2010
  • Journal of Aircraft
  • Tao Xu + 2 more


  • Research Article
  • Citations: 42
  • 10.1109/tmi.2009.2021615
PenMesh—Monte Carlo Radiation Transport Simulation in a Triangle Mesh Geometry
  • Dec 1, 2009
  • IEEE Transactions on Medical Imaging
  • A Badal + 4 more

We have developed a general-purpose Monte Carlo simulation code, called penMesh, that combines the accuracy of the radiation transport physics subroutines from PENELOPE and the flexibility of a geometry based on triangle meshes. While the geometric models implemented in most general-purpose codes--such as PENELOPE's quadric geometry--impose some limitations in the shape of the objects that can be simulated, triangle meshes can be used to describe any free-form (arbitrary) object. Triangle meshes are extensively used in computer-aided design and computer graphics. We took advantage of the sophisticated tools already developed in these fields, such as an octree structure and an efficient ray-triangle intersection algorithm, to significantly accelerate the triangle mesh ray-tracing. A detailed description of the new simulation code and its ray-tracing algorithm is provided in this paper. Furthermore, we show how it can be readily used in medical imaging applications thanks to the detailed anatomical phantoms already available. In particular, we present a whole body radiography simulation using a triangulated version of the anthropomorphic NCAT phantom. An example simulation of scatter fraction measurements using a standardized abdomen and lumbar spine phantom, and a benchmark of the triangle mesh and quadric geometries in the ray-tracing of a mathematical breast model, are also presented to show some of the capabilities of penMesh.

  • Conference Article
  • Citations: 11
  • 10.1145/582034.582081
Solution of a three-body problem in quantum mechanics using sparse linear algebra on parallel computers
  • Nov 10, 2001
  • Mark Baertschy + 1 more

A complete description of two outgoing electrons following an ionizing collision between a single electron and an atom or molecule has long stood as one of the unsolved fundamental problems in quantum collision theory. In this paper we describe our use of distributed memory parallel computers to calculate a fully converged wave function describing the electron-impact ionization of hydrogen. Our approach hinges on a transformation of the Schrodinger equation that simplifies the boundary conditions but requires solving very ill-conditioned systems of a few million complex, sparse linear equations. We developed a two-level iterative algorithm that requires repeated solution of sets of a few hundred thousand linear equations. These are solved directly by LU-factorization using a specially tuned, distributed memory parallel version of the sparse LU-factorization library Super-LU. In smaller cases, where direct solution is technically possible, our iterative algorithm still gives significant savings in time and memory despite lower megaflop rates.
