The rise of mobile technologies in the last decade has led to vast amounts of location information generated by individuals. From the knowledge discovery point of view, these data are quite valuable, but the inherent personal information in the data raises privacy concerns. There exist many algorithms in the literature that satisfy the privacy requirements of individuals by generalizing, perturbing, and suppressing their data. Current techniques that try to ensure a level of indistinguishability between trajectories in a dataset are direct applications of $k$-anonymity, and thus suffer from its shortcomings, such as the lack of diversity in sensitive regions. Moreover, these techniques fail to incorporate common background knowledge an adversary might have, such as the underlying map, the traffic density, and the anonymization algorithm itself. We propose a new privacy metric, $p$-confidentiality, that ensures location diversity by bounding the probability of a user visiting a sensitive location with the input parameter $p$. We perform our probabilistic analysis based on the background knowledge of the adversary. Instead of grouping the trajectories, we anonymize the underlying map, that is, we group nodes (points of interest) to create obfuscation areas around sensitive locations. The groups are formed in such a way that the parts of trajectories entering the groups, coupled with the adversary's background knowledge, do not increase the adversary's belief beyond what $p$-confidentiality allows. We then use the map anonymization as a model to anonymize the trajectories. We prove that our algorithm is resistant to reverse-engineering attacks when the statistics required for map anonymization are publicly available. We empirically evaluate the performance of our algorithm and show that location diversity can be satisfied effectively.
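To make the metric concrete, the following is a minimal sketch, not the paper's algorithm, of checking whether one obfuscation area satisfies $p$-confidentiality under a simple adversary model. All names (`satisfies_p_confidentiality`, the node labels, the frequency table) are hypothetical; the adversary's background knowledge is reduced here to per-node visit frequencies standing in for traffic density.

```python
# Illustrative sketch only: assumes the adversary's background knowledge
# can be summarized as per-node visit frequencies (e.g., traffic density).

def satisfies_p_confidentiality(area_nodes, sensitive, visit_freq, p):
    """Check one obfuscation area against the p bound.

    area_nodes: node ids (points of interest) grouped into one area.
    sensitive:  subset of node ids that are sensitive locations.
    visit_freq: hypothetical adversary background knowledge, mapping
                node id -> expected visit frequency.
    p:          upper bound on the adversary's belief that a trajectory
                entering the area visited a sensitive node.
    """
    total = sum(visit_freq[n] for n in area_nodes)
    if total == 0:
        return True  # no expected traffic in the area, nothing to infer
    sensitive_mass = sum(visit_freq[n] for n in area_nodes if n in sensitive)
    # The adversary's posterior that the visit was sensitive must not exceed p.
    return sensitive_mass / total <= p

freq = {"hospital": 10, "cafe": 50, "park": 40}
area = {"hospital", "cafe", "park"}
print(satisfies_p_confidentiality(area, {"hospital"}, freq, p=0.2))  # True: 10/100 <= 0.2
```

Under this toy model, grouping the sensitive node with enough high-traffic non-sensitive neighbors is exactly what pushes the ratio below $p$; the paper's contribution is forming such groups so that trajectory segments entering them do not leak additional information.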