Abstract

In this paper, we propose an enhanced version of the Authentication with Built-in Camera (ABC) protocol by employing a deep learning solution based on built-in motion sensors. The standard ABC protocol identifies mobile devices based on the photo-response non-uniformity (PRNU) of the camera sensor, while also considering QR-code-based meta-information. During registration, users are required to capture photos using their smartphone camera. The photos are sent to a server that computes the camera fingerprint, storing it as an authentication trait. During authentication, the user is required to take two photos that contain two QR codes presented on a screen. The presented QR code images also contain a unique probe signal, similar to a camera fingerprint, generated by the protocol. During verification, the server computes the fingerprint of the received photos and authenticates the user if (i) the probe signal is present, (ii) the metadata embedded in the QR codes is correct and (iii) the camera fingerprint is identified correctly. However, the protocol is vulnerable to forgery attacks when the attacker can compute the camera fingerprint from external photos, as shown in our preliminary work. Hence, attackers can easily remove their PRNU from the authentication photos without completely altering the probe signal, resulting in attacks that bypass the defense systems of the ABC protocol. In this context, we propose an enhancement to the ABC protocol, using motion sensor data as an additional and passive authentication layer. Smartphones can be identified through their motion sensor data, which, unlike photos, is never posted by users on social media platforms, thus being more secure than using photographs alone. To this end, we transform motion signals into embedding vectors produced by deep neural networks, applying Support Vector Machines for the smartphone identification task. 
Our change to the ABC protocol results in a multi-modal protocol that lowers the false acceptance rate for the attack proposed in our previous work to as low as 0.07%. In this paper, we present the attack that makes ABC vulnerable, as well as our multi-modal ABC protocol, along with relevant experiments and results.
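The abstract describes identifying smartphones by training Support Vector Machines on embedding vectors that a deep neural network produces from motion-sensor signals. The sketch below illustrates that final classification step under stated assumptions: the CNN embeddings are replaced by synthetic 64-dimensional vectors with one Gaussian cluster per device, and the hyperparameters are illustrative, not the paper's.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical stand-in for CNN embeddings of motion-sensor recordings:
# one Gaussian cluster of 64-d vectors per device.
rng = np.random.default_rng(42)
n_devices, dim, per_device = 5, 64, 40
centers = rng.normal(scale=3.0, size=(n_devices, dim))
X = np.vstack([c + rng.normal(size=(per_device, dim)) for c in centers])
y = np.repeat(np.arange(n_devices), per_device)

# Shuffle and split into train/test portions.
idx = rng.permutation(len(y))
split = int(0.8 * len(y))
train, test = idx[:split], idx[split:]

# RBF-kernel SVM for the device identification task; the paper reports
# ~99% accuracy irrespective of kernel or regularization, so the exact
# choices here (kernel="rbf", C=1.0) are illustrative.
clf = SVC(kernel="rbf", C=1.0).fit(X[train], y[train])
accuracy = clf.score(X[test], y[test])
print(f"device identification accuracy: {accuracy:.2f}")
```

On well-separated synthetic clusters the classifier identifies the "device" almost perfectly; the interesting part in the actual protocol is the embedding network, which this sketch deliberately abstracts away.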

Highlights

  • The rapid advancement of mobile device technology, such as the development of high-resolution cameras, contributes to a large volume of data shared across the World Wide Web through social media platforms and other online environments

  • The empirical results show that the Authentication with Built-in Camera (ABC) protocol is still vulnerable, regardless of the total number of images considered for photo-response non-uniformity (PRNU) estimation

  • We note that the accuracy of the multi-modal ABC protocol based on convolutional neural network (CNN) embeddings is around 99%, irrespective of the kernel type or the regularization parameter value



Introduction

The rapid advancement of mobile device technology, such as the development of high-resolution cameras, contributes to a large volume of data shared across the World Wide Web through social media platforms and other online environments. Zhongjie et al. [6] proposed the Authentication with Built-in Camera (ABC) protocol, based on a special characteristic of the camera sensor, namely the photo-response non-uniformity (PRNU) [7]. The ABC protocol introduced by Zhongjie et al. [6] uses the camera fingerprint as the main authentication factor and is composed of two phases: a registration phase, in which the PRNU fingerprint of the device is computed and stored, and an authentication phase, in which a registered device takes photos of two QR codes presented on a screen and sends them to a server for identification. The server performs a set of tests consisting of QR code metadata validation, camera fingerprint identification and forgery detection.
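The camera fingerprint identification step above can be sketched as follows. This is a minimal illustration, not the protocol's implementation: the fingerprint is estimated by simply averaging per-photo noise residuals (a simplification of maximum-likelihood PRNU estimation), the residuals themselves are synthetic, and the decision threshold is hypothetical. Verification compares a query residual against the stored fingerprint via normalized cross-correlation.

```python
import numpy as np

def estimate_fingerprint(residuals):
    # Average the per-image noise residuals to estimate the PRNU
    # fingerprint (a simplification of the usual ML estimator).
    return np.mean(residuals, axis=0)

def ncc(a, b):
    # Normalized cross-correlation between two flattened signals.
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Synthetic demo: a fixed "sensor pattern" plus random per-photo noise.
rng = np.random.default_rng(0)
pattern = rng.normal(size=(32, 32))
photos = [pattern + 0.5 * rng.normal(size=(32, 32)) for _ in range(20)]
fingerprint = estimate_fingerprint(photos)

query_same = pattern + 0.5 * rng.normal(size=(32, 32))   # same camera
query_other = rng.normal(size=(32, 32))                  # different camera

THRESHOLD = 0.3  # illustrative decision threshold
print(ncc(fingerprint, query_same) > THRESHOLD)
print(ncc(fingerprint, query_other) > THRESHOLD)
```

A photo from the enrolled camera correlates strongly with the stored fingerprint, while one from an unrelated camera does not; the forgery attack discussed in the abstract works precisely because an attacker who can estimate this fingerprint from public photos can manipulate that correlation.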

