Abstract

Firms can rely on various data protection methods to comply with the General Data Protection Regulation’s (GDPR) anonymization directive. We develop a privacy attack to estimate customers’ privacy risk and find that data protection methods commonly used in practice do not offer a reliable guarantee of privacy protection. We therefore develop a framework that uses deep learning to generate synthetic data that are both (differentially) private and useful for marketing analysts. Empirically, we apply our framework to two privacy-sensitive marketing applications in which an analyst faces everyday managerial tasks. In contrast to GDPR’s directive to minimize data collection, we show that customers’ privacy risk can be reduced by blending into a large crowd: a “Where’s Waldo” effect. Our framework provides a data protection method with a formal privacy guarantee and allows analysts to quantify, control, and communicate privacy risk levels with stakeholders, draw meaningful insights, and share data even under privacy regulations.
