Summary

This article outlines the need for, and possible solutions to, the problem of third parties’ legally untrammelled ability to collect and use identified or identifiable personal, sometimes very sensitive, health data. A solution would be the implementation of a comprehensive legal framework – from international treaties, through national legislation, to operationalising data privacy and ethics by design at the level of computer software (algorithmic) instructions. At least since the last decade of the twentieth century, the digitisation of health data and the creation of national electronic health record infrastructures have held the promise of enabling the attainment of such public health goals as personal health management, health care delivery, health-related research, and population health surveillance. Great advances in Big Data technology and, even more so, the algorithmic revolution have facilitated these four goals, though not necessarily in the ways envisaged by scholars and policy-makers of the time. Thus, personal health management is supported by apps such as Apple Health; telemedicine and teleradiology systems enable health care delivery to patients wherever they are located. Health-related medical, commercial, socio-economic and socio-political research based on Big Data is booming, while national electronic health record systems allow national and international agencies to track and scrutinise the health of individuals and populations. However, the unregulated and rampant “datafication” of identified or identifiable personal health information – collected, managed, and disseminated without data subjects’ knowledge and informed consent – effectively treats data subjects – us – as mere means to an end. The law has been lagging a long way behind technical and commercial development, yet it is possible to safeguard the privacy and other fundamental rights of data subjects.
For example, as part of the European Union’s Digital Single Market Strategy, the European Parliament adopted Regulation (EU) 2016/679 on the “protection of natural persons with regard to the processing of personal data and on the free movement of such data”, and Directive (EU) 2016/680 on the “protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data”. The Regulation will apply, and the Directive must be transposed by EU Member States into their national law, from May 2018. In the United Kingdom, the Investigatory Powers Act 2016 (UK) has systematically incorporated statutory controls, in the form of proportionality and necessity tests, on the powers of national security and law enforcement agencies to intercept communications and to access, collect and manage massive volumes of data (known as “bulk powers”). These controls aim to create privacy safeguards for intentionally or inadvertently targeted individuals. Thus, the law seems to be “awakening”. However, a comprehensive and systematic regulatory framework of controls and protections is yet to be postulated. This article outlines an approach consisting of vertical tiers that can be implemented separately or in total. The article has two parts. The first part provides background to the interface between developments in technology and the unconsented-to “datafication” of our personal health, from which, unbeknown to us, third-party algorithms create our digital identity. The second part outlines a proposal that envisages a five-tier legal framework for the protection of identifiable personal health data. Protections at each tier would be discrete yet capable of integration.
At the base – the design level – it is possible for software engineers and computer programmers to specify precisely defined algorithmic instructions for processing personal data in accordance with privacy laws and ethical standards. At the next level – data operators and/or analysts (whether human or automated algorithmic programs) – legal tests of proportionality and necessity, and standards of ethical conduct, could be mandated by legislation and embedded in the algorithm design. At the third level – individuals and organisations storing and using the data – responsibility for its integrity and security, as well as privacy and ethics, could be governed by legislation. At the top level, international treaties could provide for uniform standards and approaches.
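The base-tier idea – encoding legal tests such as consent and data minimisation directly into the processing logic rather than leaving them to after-the-fact policy – might be sketched as follows. This is a minimal illustration only; the purposes, field names, and function names are the author’s assumptions for the example and do not come from any particular statute or system.

```python
from dataclasses import dataclass

# Hypothetical "privacy by design" gate: a processing request is permitted
# only if (a) the data subject has consented to the stated purpose, and
# (b) the requested fields do not exceed what is necessary for that purpose
# (a simple necessity / data-minimisation test).

@dataclass
class ProcessingRequest:
    purpose: str
    fields: list

# Illustrative configuration (assumed, not from the article):
CONSENTED_PURPOSES = {"care_delivery"}
NECESSARY_FIELDS = {
    "care_delivery": {"name", "diagnosis"},  # minimum data needed per purpose
}

def is_processing_permitted(request: ProcessingRequest) -> bool:
    """Consent check plus a necessity (data-minimisation) check."""
    if request.purpose not in CONSENTED_PURPOSES:
        return False  # no consent for this purpose
    allowed = NECESSARY_FIELDS.get(request.purpose, set())
    return set(request.fields) <= allowed  # no excess fields
```

In such a design the legal test is evaluated before any data leaves storage, so a request for an unconsented purpose, or for fields beyond those necessary, is refused by the software itself.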