A novel frequency-domain approach to the range migration algorithm for efficient medical image processing: Application in tumor detection and identification

Similar Papers
  • Research Article
  • Cited by 30
  • 10.1155/2021/4196241
Osteolysis: A Literature Review of Basic Science and Potential Computer-Based Image Processing Detection Methods.
  • Jan 1, 2021
  • Computational intelligence and neuroscience
  • Soroush Baseri Saadi + 7 more

Osteolysis is one of the most prominent causes of revision surgery in total joint arthroplasty. This biological phenomenon is induced by wear particles and corrosion products that stimulate an inflammatory response in the surrounding tissues. The eventual result of osteolysis is the activation of macrophages, leading to bone resorption and prosthesis failure. Various factors are involved in the initiation of osteolysis, from biological issues, design, material specifications, and model of the prosthesis to the health condition of the patient. Nevertheless, the factors leading to osteolysis are sometimes preventable; changes in implant design and polyethylene manufacturing aim to reduce overall wear. Osteolysis is clinically asymptomatic and can be diagnosed and analyzed during follow-up sessions through various imaging modalities and methods, such as serial radiography, CT scan, MRI, and image processing-based methods, especially those using artificial neural network algorithms. Deep learning algorithms with a variety of neural network structures, such as CNN, U-Net, and Seg-UNet, have proved to be efficient for medical image processing, specifically in the field of orthopedics for the detection and segmentation of tumors. These algorithms can detect and analyze osteolytic lesions well in advance during follow-up sessions so that proper treatment can be administered before a critical point is reached. Osteolysis can be treated surgically or nonsurgically with medications; however, revision surgery is the only solution for progressive osteolysis. In this literature review, the underlying causes, mechanisms, and treatments of osteolysis are discussed, with the main focus on the computer-based methods and algorithms that can be effectively employed for its detection.

  • Research Article
  • Cited by 4
  • 10.15287/afr.2018.1282
Forest inventory sensitivity to UAS-based image processing algorithms
  • Jul 30, 2019
  • Annals of Forest Research
  • Bonifasius Maturbongs + 3 more

Frequent and accurate estimation of forest structure parameters, such as the number of trees per hectare or total height, is mandatory for sustainable forest management. Unmanned aircraft systems (UAS) equipped with inexpensive sensors can be used to monitor and measure forest structure. The detailed information provided by the UAS allows tree-level forest inventory. However, tree identification depends on a variety of parameters defining the image processing and tree segmentation algorithms. The objective of our study was to identify parameter combinations that accurately delineate trees and their heights. We evaluated the impact of different tree segmentation and point cloud generation algorithms on forest inventory from imagery collected with a UAS over a mature Douglas-fir plantation forest. We processed the images with two commonly used commercial software packages, Agisoft PhotoScan and Pix4Dmapper, both implementing Structure from Motion image processing algorithms. For each software package we generated photogrammetric point clouds by varying the parameters defining the implementation. We segmented individual trees and estimated their heights using three algorithms: Variable Window Filter, Graph-Theoretical, and Watershed Segmentation. We assessed the impact of the image processing algorithms on forest inventory by comparing the estimated trees with trees manually identified from the point clouds. We found that the choice of tree segmentation and image processing algorithms has a significant effect on accurately identifying trees. For tree height estimation, we found strong evidence that image processing algorithms had significant effects, whereas tree segmentation algorithms did not. These findings may be of interest to others who are using high-resolution spatial imagery to estimate forest inventory parameters.

  • Research Article
  • 10.1515/jisys-2023-0245
High-resolution image processing and entity recognition algorithm based on artificial intelligence
  • Dec 7, 2024
  • Journal of Intelligent Systems
  • Yutong Sun

Objective: With the popularity of high-resolution devices such as high-definition and ultra-high-definition televisions and smartphones, the demand for high-resolution images is increasing, which places higher requirements on high-resolution image processing and entity recognition technology. Method: This article reviews the research progress and application of high-resolution image processing and entity recognition algorithms from the perspective of artificial intelligence (AI). First, the important role of AI in high-resolution image processing and entity recognition is introduced, followed by the applications of deep learning-based algorithms in high-resolution image grayscale equalization, denoising, and deblurring. Subsequently, the application of AI-based object detection and image segmentation algorithms in entity recognition is explored, and the superiority of AI-based high-resolution image processing and entity recognition algorithms is verified through training and testing experiments. Finally, a summary and outlook on AI-based high-resolution image processing and entity recognition algorithms are given. Result: Experimental testing found that AI-based high-resolution image processing and entity recognition were more efficient; overall image recognition ability improved by 29.6% compared to traditional image recognition models, and recognition speed and accuracy also improved. Conclusion: AI-based high-resolution image processing and entity recognition algorithms enable observers to see detailed information in an image more clearly, improving the efficiency and accuracy of image analysis. Through continuous improvement of algorithm performance, real-time application, and expansion of cross-disciplinary applications, more advanced and powerful image processing and entity recognition technologies can be expected, bringing strong impetus to research and application in various fields.

  • Book Chapter
  • Cited by 7
  • 10.1016/b978-0-12-384988-5.00046-2
Chapter 46 - Medical Image Processing Using GPU-Accelerated ITK Image Filters
  • Jan 1, 2011
  • GPU Computing Gems Emerald Edition
  • Won-Ki Jeong + 2 more


  • Research Article
  • Cited by 6
  • 10.3390/math10132361
Efficient Algorithms for Data Processing under Type-3 (and Higher) Fuzzy Uncertainty
  • Jul 5, 2022
  • Mathematics
  • Vladik Kreinovich + 3 more

It is known that, to more adequately describe expert knowledge, it is necessary to go from traditional (type-1) fuzzy techniques to higher-order ones: type-2, type-3, and even higher. Until recently, only type-1 and type-2 fuzzy sets were used in practical applications. Lately, however, it has turned out that type-3 fuzzy sets are also useful in some applications. Because of this practical importance, it is necessary to design efficient algorithms for data processing under such type-3 (and higher-order) fuzzy uncertainty. In this paper, we show how to combine known efficient algorithms for processing type-1 and type-2 uncertainty to come up with a new algorithm for the type-3 case.
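The type-1 building block that such layered algorithms reuse, interval arithmetic on alpha-cuts, can be sketched in a few lines (an illustrative example with hypothetical helper names, not the paper's actual algorithm):

```python
def alpha_cut(triangular, alpha):
    """Alpha-cut of a triangular fuzzy number (a, b, c): the interval of
    values whose membership degree is at least alpha."""
    a, b, c = triangular
    return (a + alpha * (b - a), c - alpha * (c - b))

def fuzzy_add(u, v, alphas=(0.0, 0.5, 1.0)):
    """Propagate type-1 fuzzy uncertainty through addition by applying
    interval arithmetic level by level on alpha-cuts; higher-order
    (type-2/type-3) algorithms reuse this kind of per-level computation."""
    out = {}
    for alpha in alphas:
        (ul, uh), (vl, vh) = alpha_cut(u, alpha), alpha_cut(v, alpha)
        out[alpha] = (ul + vl, uh + vh)  # interval sum at this level
    return out
```

At alpha = 1 the cut collapses to the peak, so the sum of (0, 1, 2) and (1, 2, 3) is exactly 3 there, while at alpha = 0 the full supports combine.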

  • Research Article
  • Cited by 50
  • 10.1016/j.amc.2020.125046
A comparison between the sampling Kantorovich algorithm for digital image processing with some interpolation and quasi-interpolation methods
  • Jan 25, 2020
  • Applied Mathematics and Computation
  • Danilo Costarelli + 2 more


  • Dissertation
  • 10.17077/etd.thro601t
A wavelet-based framework for efficient processing of digital imagery with an application to helmet-mounted vision systems
  • Nov 19, 2018
  • Andrew Kusiak + 5 more

Image acquisition devices, as well as image processing theory, algorithms, and hardware, have advanced to the point that low Size-Weight-and-Power, real-time embedded imaging systems have become a reality. To be practical in a fielded application, an image processing sub-system must be able to conduct multiple, often highly complex tasks in real-time. The design and construction of such systems have to address technical challenges, including real-time, low-latency processing and fixed-point algorithms, in order to leverage the lowest-power computing platforms. Further design complications stem from the reality that state-of-the-art image processing algorithms take very different forms, greatly complicating low-latency implementations. This dissertation presents the design and preliminary implementation of an image processing sub-system that minimizes computational complexity and power consumption by eliminating repeated transformations between processing domains. Specifically, this processing chain utilizes the LeGall 5/3 wavelet as the basis for applying multiple algorithms within a single domain. The wavelet processing chain is compared, in terms of image quality, computational cost, and power consumption, to a benchmark processing chain comprised of algorithms intended to produce high-quality image results. Image quality is assessed through a subject matter expert evaluation. Computational cost is analyzed theoretically and empirically, and the power consumption is derived from the execution times and characteristics of the processing devices. The results demonstrate significant promise, but several areas for additional work have been identified.
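The LeGall 5/3 wavelet mentioned above is attractive for fixed-point hardware because its lifting form needs only integer additions and shifts. A minimal one-level sketch, assuming the JPEG 2000 reversible lifting conventions (not the dissertation's actual code, and with a simple boundary-extension choice):

```python
def legall53_forward(x):
    """One level of the reversible LeGall 5/3 lifting wavelet on a 1-D
    integer signal: predict odd samples from even neighbors (detail d),
    then update even samples from the details (approximation s)."""
    n = len(x)
    d = []
    for i in range(1, n, 2):
        left = x[i - 1]
        right = x[i + 1] if i + 1 < n else x[i - 1]  # symmetric extension
        d.append(x[i] - (left + right) // 2)
    s = []
    for k, i in enumerate(range(0, n, 2)):
        dl = d[k - 1] if k - 1 >= 0 else (d[0] if d else 0)
        dr = d[k] if k < len(d) else (d[-1] if d else 0)
        s.append(x[i] + (dl + dr + 2) // 4)
    return s, d

def legall53_inverse(s, d):
    """Undo the lifting steps in reverse order: integer lifting is
    exactly invertible, so reconstruction is lossless."""
    n = len(s) + len(d)
    x = [0] * n
    for k, i in enumerate(range(0, n, 2)):
        dl = d[k - 1] if k - 1 >= 0 else (d[0] if d else 0)
        dr = d[k] if k < len(d) else (d[-1] if d else 0)
        x[i] = s[k] - (dl + dr + 2) // 4
    for k, i in enumerate(range(1, n, 2)):
        left = x[i - 1]
        right = x[i + 1] if i + 1 < n else x[i - 1]
        x[i] = d[k] + (left + right) // 2
    return x
```

Because both directions apply the same integer expressions, the round-trip reconstructs the input exactly, which is what makes this transform usable as a single shared domain for a fixed-point processing chain.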

  • Conference Article
  • 10.1109/icpp.1993.72
Efficient Image Processing Algorithms on the Scan Line Array Processor
  • Aug 1, 1993
  • David Helman + 1 more

We develop efficient algorithms for low and intermediate level image processing on the scan line array processor that handles images in a scan line fashion. For low level processing, we present algorithms for block DFT, block DCT, convolution, template matching, shrinking, and expanding. These algorithms run in real-time - that is, the output lines are generated at the rate of O(m) time per line, where the required processing is based on neighborhoods of size m x m. For intermediate level processing, we present efficient algorithms for scaling, translation, connected components, and convex hulls of multiple figures.
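The scan-line style of processing described above, buffering only the m lines that an m x m neighborhood operation needs, can be illustrated with a plain convolution sketch (hypothetical Python for exposition, not the authors' array-processor code):

```python
from collections import deque

def scanline_convolve(lines, kernel):
    """Convolve an image with an m x m kernel one scan line at a time.

    `lines` yields one row (list of numbers) at a time; only m rows are
    buffered, so each output line can be emitted as soon as its
    neighborhood is complete, matching the streaming model."""
    m = len(kernel)
    half = m // 2
    buf = deque(maxlen=m)  # rolling window of the last m scan lines
    for row in lines:
        buf.append(row)
        if len(buf) == m:
            w = len(row)
            out = []
            for x in range(w):
                acc = 0
                for ky in range(m):
                    for kx in range(m):
                        xx = x + kx - half
                        if 0 <= xx < w:  # clip at the image border
                            acc += buf[ky][xx] * kernel[ky][kx]
                out.append(acc)
            yield out
```

Each output line costs O(m^2) work per pixel but only O(m) lines of memory, which is the property that makes the scan-line model attractive for real-time hardware.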

  • Research Article
  • Cited by 35
  • 10.1007/bf01211659
A knowledge-based image-inspection system for automatic defect recognition, classification, and process diagnosis
  • Sep 1, 1994
  • Machine Vision and Applications
  • Petra Perner

Combining knowledge-based processing with image processing is a key issue in the future of visual inspection of complex patterns such as offset prints. Often the class of the defect determines the state of the process, which must be known in order to eliminate the cause of the defect. We describe the architecture of such a complex knowledge-based inspection system. The system has been used for defect recognition and misprint diagnosis in offset printing, but it is flexible enough for other applications. It is based on a set of general and powerful tools for the knowledge-based interpretation of sensor signals. An object-oriented concept and task-dependent algorithms for efficient image processing are implemented. The paper concentrates on four points: integration of the system into the offset printing process, a description of the system architecture, knowledge acquisition, and implementation results.

  • Research Article
  • Cited by 49
  • 10.1016/j.measurement.2019.02.006
Digital image recognition based on Fractional-order-PCA-SVM coupling algorithm
  • May 18, 2019
  • Measurement
  • Lin Hu + 1 more


  • Conference Article
  • Cited by 1
  • 10.1109/aeect.2013.6716474
A comparative study of signal and image processing systems for condition monitoring of milling processes using artificial intelligence
  • Dec 1, 2013
  • Milad Ahmed Elgargni + 1 more

A comparative study between two types of tool wear monitoring systems for milling processes is introduced in this paper. The suggested sensory fusion approach includes an infrared camera in addition to force, vibration, sound, and acoustic emission sensors. The majority of the research work available in the literature and industry focuses on one-dimensional signals, such as force and vibration; two-dimensional data, such as infrared and visual images, are rarely considered in relation to machining operations. This work compares one-dimensional and two-dimensional data for the development of a tool condition monitoring system for milling processes. The paper presents a comparative study of the performance of signal and image processing algorithms using neural networks, with Fourier transformation and wavelet analysis used to process the one-dimensional and two-dimensional data, respectively. The results indicate that two-dimensional data obtained from infrared images has significant capability, compared to one-dimensional data, for the detection of tool wear with the selected image and signal processing algorithms.

  • Book Chapter
  • 10.1007/3-540-44839-x_79
Parallel High-Level Image Processing on a Standard PC
  • Jan 1, 2003
  • M Fikret Ercan + 1 more

Streaming SIMD Extensions (SSE) is a feature of the Pentium III and Pentium IV classes of microprocessors. By fully exploiting SSE, parallel algorithms can be implemented on a standard personal computer, and a significant speedup can be achieved compared to sequential code. PCs, mainly employing Intel Pentium processors, are the most commonly available and inexpensive solution for many applications. Therefore, the performance of SSE in common image and signal processing algorithms has been studied extensively in the literature. Nevertheless, most of these studies are concerned with low-level image processing algorithms, which involve pixel-in, pixel-out operations. In this paper, we study higher-level image processing algorithms, where image features and recognition are the output of the operations. The Hough transform and geometric hashing are commonly used algorithms for this purpose; their implementation using SSE is presented here.
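As a point of reference for the higher-level algorithms named above, the classic Hough transform for line detection accumulates votes in (rho, theta) space; this plain scalar sketch (illustrative Python with hypothetical parameter names, not the paper's SSE code) shows the loop structure that SIMD vectorization targets:

```python
import math

def hough_lines(points, width, height, n_theta=180):
    """Minimal Hough transform for lines: each edge pixel (x, y) votes
    for every line rho = x*cos(theta) + y*sin(theta) passing through it.
    Peaks in the accumulator correspond to detected lines."""
    diag = int(math.hypot(width, height))
    n_rho = 2 * diag + 1  # rho ranges over [-diag, diag]
    acc = [[0] * n_theta for _ in range(n_rho)]
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = int(round(x * math.cos(theta) + y * math.sin(theta)))
            acc[rho + diag][t] += 1  # shift rho to a non-negative index
    return acc, diag
```

The inner loop over theta is independent across angles, which is exactly the kind of data parallelism SSE-style instructions exploit.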

  • Research Article
  • Cited by 3
  • 10.1155/2021/9198884
Image Processing Design and Algorithm Research Based on Cloud Computing
  • Jan 1, 2021
  • Journal of Sensors
  • Defu He + 1 more

Image processing technology is a popular practical technology in the computer field and has important research value for signal information processing. This article studies the design and algorithms of image processing under cloud computing technology and proposes cloud computing technology and image processing algorithms for image data processing. The material structure and performance of the system allow a verification algorithm to be chosen to achieve the final operation. The design starts from the image editing features, with software and hardware functions reasonably separated. On this basis, the structure of a real-time image processing system based on SOPC technology is built, and the corresponding functional receiving unit is designed for real-time image storage, editing, and viewing. Studies have shown that the design of an image processing system based on cloud computing increased the speed of image data processing by 14%. Compared with other algorithms, this image processing algorithm has great advantages in image compression and image restoration.

  • Research Article
  • Cited by 14
  • 10.1016/j.artmed.2010.05.001
Decision support in heart failure through processing of electro- and echocardiograms
  • Jun 1, 2010
  • Artificial Intelligence in Medicine
  • Franco Chiarugi + 5 more

Signal and imaging investigations are currently key components in the diagnosis, prognosis, and follow-up of heart diseases. Nowadays, the need for more efficient, cost-effective, and personalised care has led to a renaissance of clinical decision support systems (CDSSs). The purpose of this paper is to present an effective way of achieving a high-level integration of signal and image processing methods in the general process of care, by means of a clinical decision support system, and to discuss the advantages of such an approach. From the wide range of heart diseases, heart failure, whose complexity best highlights the benefits of this integration, has been selected. After an analysis of users' needs and expectations, significant and suitably designed image and signal processing algorithms are introduced to objectively and reliably evaluate important features involved in decisional problems in the heart failure domain. Then, a CDSS is conceived so as to combine the domain knowledge with advanced analytical tools for data processing. In particular, the relevant and significant medical knowledge and experts' know-how are formalised according to an ontological formalism, suitably augmented with a base of rules for inferential reasoning. The proposed methods were tested and evaluated in the daily practice of the physicians operating at the Department of Cardiology, University Magna Graecia, Catanzaro, Italy, on a population of 79 patients. Different scenarios, involving decisional problems based on the analysis of biomedical signals and images, were considered. In these scenarios, after some training and 3 months of use, the CDSS was able to provide important and useful suggestions in routine workflows, by integrating the clinical parameters computed through the developed methods for echocardiographic image segmentation and the algorithms for electrocardiography processing. The CDSS allows the integration of signal and image processing algorithms into the general process of care, and feedback from end-users has been positive.

  • Conference Article
  • Cited by 9
  • 10.1109/iccsp.2014.6949932
An efficient edge detection algorithm for flame and fire image processing
  • Apr 1, 2014
  • Y Kalpana + 1 more

Edge detection is one of the preprocessing steps in image analysis. Edges characterize boundaries, and edge detection is one of the most difficult tasks in image processing. Digital image processing is playing an increasingly vital role in imaging-based fire monitoring systems. Since flame images are a special class of images, some unique features of a flame may be used to identify flame edges. There are differences between flame images and other general images: the brightness of the flame is generally much higher than that of other objects, while the background is comparatively dark, and the expected flame edge should be clear and uninterrupted. Several known edge detection methods have been tested to identify flame edges, but the results achieved are disappointing, and existing methods do not emphasize the continuity and clarity of flame and fire edges. Hence, a new edge detection algorithm is proposed for the detection of flame and fire in fire alert systems. This improved method identifies the edges of flames correctly by removing noise in the flame images, and it detects continuous, clear flame/fire edges, outlining objects and the boundaries between objects and the background. Experimental results for different flame images proved the effectiveness and robustness of the algorithm.
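One common way to exploit the brightness contrast described above is to mask dim background pixels before computing a gradient. The following is a hypothetical sketch of that idea using a Sobel operator, not the paper's proposed algorithm, with illustrative threshold values:

```python
import math

def flame_edges(img, brightness_thresh=200, grad_thresh=100):
    """Flame-oriented edge detection sketch: flames are much brighter
    than the background, so dim pixels are suppressed first, then a
    Sobel gradient magnitude is thresholded to mark edge pixels."""
    h, w = len(img), len(img[0])
    # Step 1: suppress the dark background so only flame pixels remain.
    mask = [[px if px >= brightness_thresh else 0 for px in row] for row in img]
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal Sobel kernel
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical Sobel kernel
    edges = [[0] * w for _ in range(h)]
    # Step 2: threshold the gradient magnitude on the masked image.
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * mask[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * mask[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            edges[y][x] = 1 if math.hypot(gx, gy) >= grad_thresh else 0
    return edges
```

The pre-masking step is what distinguishes this from generic edge detection: gradients inside the dark background are zeroed out, so only the flame boundary survives the threshold.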
