Abstract

The Spatial Rich Model (SRM) generates powerful steganalysis features, but it has high computational complexity because it requires computing tens of thousands of convolutions with image noise residuals. Practical applications that handle massive numbers of images transferred over the Internet would suffer long computation times if the extraction ran only on a CPU. To accelerate steganalysis, we present a parallel SRM feature extraction algorithm based on the GPU architecture. We expose the parallelism in the algorithm, modify the original SRM extraction procedure, and employ strategies to mitigate its inherently sequential steps. Several OpenCL optimization techniques are also applied to accelerate the extraction process, such as convolution unrolling, coalesced memory access, and a split-merge strategy for the co-occurrence matrix calculation. The experimental results show that the proposed parallel extraction algorithm runs 25 to 55 times faster than the original single-threaded algorithm on images of different sizes. In addition, on an AMD HD 6850 GPU our algorithm runs 2 to 4.2 times faster than on an Intel quad-core CPU, which indicates that the algorithm makes good use of the GPU cores.
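To make the parallelism concrete, the sketch below shows how the residual computation at the core of SRM maps naturally onto GPU work-items: each work-item convolves one pixel's neighbourhood with a high-pass filter. This is a minimal illustrative OpenCL kernel, not the paper's actual implementation; the kernel name, argument names, and the assumption of a single 3x3 filter are hypothetical.

```c
/* Hypothetical sketch: one work-item computes the noise residual of one
 * pixel by convolving its 3x3 neighbourhood with a high-pass SRM filter
 * stored in constant memory. Names and the 3x3 filter size are assumptions
 * for illustration only. */
__kernel void srm_residual(__global const float *image,   /* input image, row-major   */
                           __global float *residual,      /* output residual map      */
                           __constant float *filt,        /* 3x3 high-pass filter     */
                           const int width,
                           const int height)
{
    const int x = get_global_id(0);
    const int y = get_global_id(1);

    /* Skip the one-pixel border so the 3x3 window stays inside the image. */
    if (x < 1 || y < 1 || x >= width - 1 || y >= height - 1)
        return;

    float acc = 0.0f;
    /* Small fixed-size loops like these are what convolution unrolling
     * removes: the compiler (or a manual unroll) eliminates the loop
     * overhead and keeps the filter taps in registers. */
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx)
            acc += filt[(dy + 1) * 3 + (dx + 1)]
                 * image[(y + dy) * width + (x + dx)];

    residual[y * width + x] = acc;
}
```

Because adjacent work-items read adjacent image elements, the row-major indexing above also gives coalesced memory access, one of the OpenCL optimizations mentioned in the abstract.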
