Abstract

In this paper, we introduce Dense D2C-Net, a novel unobtrusive display-to-camera (D2C) communication scheme that embeds additional data into visual content and extracts it through a deep convolutional neural network (DCNN). The encoding process of Dense D2C-Net establishes connections among all layers of the cover image and fosters feature reuse, preserving the visual quality of the image. Binary data are embedded in the Y channel owing to its resilience against distortion from image compression and its lower sensitivity to color transformations. The encoder integrates hybrid layers that combine feature maps from the cover image with the input binary data to hide the embedded data efficiently, while multiple noise layers effectively mitigate the distortions that the optical wireless channel imposes on the transmitted data. At the decoder, a series of 2D convolutional layers extracts the output binary data from the captured image. We conducted experiments in a real-world setting using a smartphone camera and a digital display, demonstrating that the proposed scheme outperforms conventional DCNN-based D2C schemes across varying parameters such as transmission distance, capture angle, display brightness, and camera resolution.
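To illustrate why embedding in the Y (luma) channel is attractive, the sketch below converts an RGB pixel to YCbCr and back using the standard BT.601 weights. This is a hedged illustration of the color-space rationale only, not code from the paper; the function names and the choice of BT.601 full-range coefficients are assumptions for the example. Because JPEG compression subsamples the chroma (Cb/Cr) planes while keeping luma at full resolution, a payload carried in Y survives compression and display color shifts better than one carried in chroma.

```python
# Illustrative sketch (not from the paper): BT.601 full-range RGB <-> YCbCr
# conversion, showing the Y (luma) channel that Dense D2C-Net embeds into.

def rgb_to_ycbcr(r, g, b):
    """Convert one 8-bit RGB pixel to YCbCr (BT.601, full range)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    """Inverse BT.601 full-range conversion back to RGB."""
    r = y + 1.402 * (cr - 128.0)
    g = y - 0.344136 * (cb - 128.0) - 0.714136 * (cr - 128.0)
    b = y + 1.772 * (cb - 128.0)
    return r, g, b

# Perturbing only Y leaves Cb/Cr untouched, so chroma subsampling during
# JPEG compression does not erase the embedded bits.
y, cb, cr = rgb_to_ycbcr(100, 150, 200)
r, g, b = ycbcr_to_rgb(y + 1.0, cb, cr)  # small luma-only perturbation
```

A full scheme would of course apply such a perturbation as a learned, spatially varying residual over the whole Y plane rather than a uniform offset.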
