Abstract

The ever-increasing volume of vehicular data has enabled different service providers to access and monetize the in-vehicle network data of millions of drivers. However, such data often carry personal or even potentially sensitive information, and therefore service providers either need to ask for drivers' consent or anonymize such data in order to comply with data protection regulations. In this paper, we show that both fine-grained consent control and the adequate anonymization of in-vehicle network data are very challenging. First, by exploiting the fact that in-vehicle sensor measurements are inherently interdependent, we are able to effectively i) re-identify a driver even from raw, unprocessed CAN data with 97% accuracy, and ii) reconstruct the vehicle's complete location trajectory knowing only its speed and steering wheel position. Since such signal interdependencies are hard to identify even for data controllers, drivers' consent will arguably not be informed and hence may become invalid. Second, we show that the non-systematic application of different standard anonymization techniques (e.g., aggregation, suppression, signal distortion) often yields volatile, empirical privacy guarantees for the population as a whole but fails to provide a strong, worst-case privacy guarantee to every single individual. Therefore, we advocate the application of principled privacy models (such as Differential Privacy) to anonymize data with strong worst-case guarantees.
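To illustrate why speed and steering wheel position alone can reveal a trajectory, consider dead reckoning with a kinematic bicycle model: integrating the yaw rate implied by the steering angle, then the heading-projected speed, recovers the path up to its unknown start point and orientation. The sketch below is illustrative only; the sampling interval, wheelbase, and function name are assumptions, not details from the paper.

```python
import math

def reconstruct_trajectory(speeds, steering_angles, dt=0.1, wheelbase=2.7):
    """Dead-reckon a 2-D path from speed (m/s) and front-wheel steering
    angle (rad) samples via a kinematic bicycle model.

    dt (sampling interval, s) and wheelbase (m) are illustrative
    assumptions, not values taken from the paper.
    """
    x, y, heading = 0.0, 0.0, 0.0  # start pose is unknown; fix it at origin
    path = [(x, y)]
    for v, delta in zip(speeds, steering_angles):
        # yaw rate of a bicycle model: v / L * tan(delta)
        heading += v / wheelbase * math.tan(delta) * dt
        x += v * math.cos(heading) * dt
        y += v * math.sin(heading) * dt
        path.append((x, y))
    return path

# Driving straight at 10 m/s for 1 s moves the vehicle ~10 m ahead.
path = reconstruct_trajectory([10.0] * 10, [0.0] * 10)
```

The recovered path is relative; matching it against a road network (map matching) is what turns it into an absolute location trace.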
