Abstract

This paper presents a hardware-efficient pixel-domain just-noticeable difference (JND) model and its hardware architecture implemented on an FPGA. The JND architecture is further proposed as part of a low-complexity pixel-domain perceptual image coding architecture based on downsampling and predictive coding. Downsampling is performed adaptively on the input image, guided by regions-of-interest (ROIs) identified by measuring the downsampling distortions against the visibility thresholds given by the JND model. The coding error at every pixel location is guaranteed to stay within the corresponding JND threshold, preserving excellent visual quality. Experimental results show that the proposed JND model estimates visual redundancies more accurately than earlier classic JND models. Compression experiments demonstrate improved rate-distortion performance and visual quality over JPEG-LS, as well as lower compressed bit rates than other standard codecs such as JPEG 2000 at the same peak signal-to-perceptible-noise ratio (PSPNR). FPGA synthesis results targeting a mid-range device show modest hardware resource requirements and a throughput above 100 Mpixel/s for both the JND model and the perceptual encoder.
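
For context, PSPNR counts only the portion of the coding error that exceeds the per-pixel JND threshold as perceptible noise. Below is a minimal sketch, assuming 8-bit grayscale images and the common Chou-and-Li-style definition; the paper's exact formulation may differ, and the function name is illustrative:

```python
import numpy as np

def pspnr(original: np.ndarray, reconstructed: np.ndarray,
          jnd: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-perceptible-noise ratio for 8-bit images.

    Only the error magnitude exceeding the per-pixel JND threshold is
    treated as perceptible noise (Chou-Li-style definition; assumed here,
    not taken verbatim from this paper).
    """
    err = np.abs(original.astype(np.float64) - reconstructed.astype(np.float64))
    perceptible = np.maximum(err - jnd, 0.0)   # error below JND is invisible
    mse_p = np.mean(perceptible ** 2)
    if mse_p == 0.0:
        return float("inf")                    # all error within JND: perceptually lossless
    return 10.0 * np.log10(peak ** 2 / mse_p)
```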

Highlights

  • Advances in sensor and display technologies have led to rapid growth in data bandwidth in high-performance imaging systems

  • The contrast masking and luminance masking effects are combined into the final just-noticeable difference (JND) value using the nonlinear additivity model for masking (NAMM) operator in texture and smooth regions and the maximum operator in edge regions (see the sketch after this list)

  • The proposed JND model and architecture are suitable for implementation on FPGAs in real-time, low-complexity embedded systems
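
The second highlight amounts to a simple per-pixel selection rule. A minimal sketch follows, assuming the NAMM overlap coefficient C_GR = 0.3 commonly used in the JND literature; the paper's exact constants and region classification are given in the sections below, and the names here are illustrative:

```python
import numpy as np

# Overlap coefficient for NAMM; 0.3 is the value commonly used in the
# literature (an assumption here, not necessarily this paper's choice).
C_GR = 0.3

def combine_jnd(t_lum: np.ndarray, t_con: np.ndarray,
                edge_mask: np.ndarray) -> np.ndarray:
    """Combine luminance-masking (t_lum) and contrast-masking (t_con)
    thresholds into a final per-pixel JND map."""
    namm = t_lum + t_con - C_GR * np.minimum(t_lum, t_con)  # texture/smooth regions
    edge = np.maximum(t_lum, t_con)                          # edge regions
    return np.where(edge_mask, edge, namm)
```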

Introduction

Advances in sensor and display technologies have led to rapid growth in data bandwidth in high-performance imaging systems. Pixel-domain compression algorithms such as JPEG-LS can be adapted to exploit pixel-domain JND models, e.g., by setting the quantization step size adaptively from the JND thresholds. A new region-adaptive pixel-domain JND model based on efficient local operations is proposed; it detects visibility thresholds more accurately than the classic JND model [9] and at a lower complexity than more recent models [11,12]. A low-complexity pixel-domain perceptual image coder [14] exploits the visibility thresholds given by the proposed JND model. The coding algorithm addresses both the coding efficiency and the visual quality limitations of conventional pixel-domain coders within a framework of adaptive downsampling guided by perceptual regions-of-interest (ROIs) derived from JND thresholds. A sketch of such JND-adaptive quantization follows.
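
As an illustration of JND-adaptive quantization in a pixel-domain coder, the sketch below adapts JPEG-LS-style near-lossless predictive coding so that the per-pixel error bound comes from a JND map. The MED predictor is the standard JPEG-LS one; `encode_row` and its wiring are hypothetical, not the paper's actual design:

```python
import numpy as np

def med_predict(a: int, b: int, c: int) -> int:
    """JPEG-LS median edge detector: a = left, b = above, c = above-left."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

def encode_row(row: np.ndarray, prev: np.ndarray, jnd: np.ndarray):
    """Near-lossless residual quantization with a per-pixel JND bound.

    Quantizing the prediction residual with step 2*T + 1 guarantees the
    reconstruction error at each pixel stays within its JND threshold T.
    The encoder predicts from its own reconstruction so a decoder applying
    the inverse steps stays in sync.
    """
    recon = np.empty_like(row)
    symbols = []
    for x in range(len(row)):
        a = int(recon[x - 1]) if x > 0 else int(prev[x])
        b = int(prev[x])
        c = int(prev[x - 1]) if x > 0 else b
        pred = med_predict(a, b, c)
        t = int(jnd[x])
        # Floor division maps any residual in [-t, t] to 0 (invisible error).
        q = (int(row[x]) - pred + t) // (2 * t + 1)
        symbols.append(q)  # entropy-code these in a real coder
        recon[x] = np.clip(pred + q * (2 * t + 1), 0, 255)
    return symbols, recon
```

With this step size, the dequantized residual differs from the true residual by at most t, so |row[x] - recon[x]| <= jnd[x] at every pixel, which is the perceptually lossless guarantee the abstract describes.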

Background in Pixel-Domain JND Modeling
Luminance Masking Estimation
Contrast Masking Estimation
Formulation of JND Threshold
Proposed JND Model
Edge and Texture Detection
Region-Based Weighting of Visibility Thresholds due to Contrast Masking
Final JND Threshold
Overview of Proposed JND Hardware Architecture
Row Buffer
Pipelined Weighted-Sum Module
Luminance Masking Function
Contrast Masking Function
Edge-Texture-Smooth Function
Edge Detection
High Contrast Activity
JND Calculation Function
JND-Based Pixel-Domain Perceptual Image Coding Hardware Architecture
Top-Level Architecture of the JND-Based Pixel-Domain Perceptual Encoder
Encoder Front End
Pixel Processing Order Conversion
Downsampling and ROI Decision
ROI-Based Pixel Selection
Predictive Coding and Output Bitstream
Analysis of Integer Approximation of the Gaussian Kernel
Performance of the Proposed JND Model
Complexity Comparison of Proposed JND Model and Existing JND Models
Appendix E: Sobel Gradients
Conclusions