Abstract
How much can you tell from a face? Quite a lot, it would seem. Technology has moved on from detecting faces in an image to recognising their identity or analysing them to extract phenotypic features. The latter approach, facial analysis, can provide clinically meaningful results, such as detection of acromegaly, a rare hormonal disorder, but at what cost? Two feasibility studies on the clinical use of facial analysis have been published in The Lancet Digital Health.

In the first study, Porras and colleagues developed a machine learning-based facial phenotyping model that analyses photographs of children (<21 years) and estimates their risk of presenting with a genetic syndrome (such as Williams-Beuren syndrome) on the basis of facial dysmorphology. The model had a mean accuracy of 88% across age, sex, and race or ethnicity, and could prove essential for early risk assessment, particularly in low-income and middle-income countries where access to genetic screening and specialist services is limited. In the second study, Hoti and colleagues validated the PainChek Infant App for assessing medical procedural pain on the basis of facial expressions. Using video segments of infants (2·2–6·9 months) being immunised, they showed that the application's results correlated well with manual pain assessments and could support early detection and management of infant procedural pain. Both tools can be administered at the point of care through a smartphone application.

Other clinical uses for automated facial analysis have also been explored. Facial analysis integrated into a sensing system has been used to autonomously and pervasively monitor delirious and non-delirious patients, and their environment, in intensive care units. Methods of deployment have also expanded: NeckFace, a neck-mounted wearable, can track changes in facial expressions using in-built cameras, with the researchers suggesting that such changes could be indicative of changes in emotional state and could allow individuals to track their mental health.

However, these tools are not without limitations. Porras and colleagues found that their facial phenotyping model had lower accuracy for photographs of African and Asian children than for white and Hispanic children, probably because these groups were underrepresented in the dataset. Furthermore, although the model is not intended to supplant genetic testing, it was not compared with the assessments of trained geneticists, leaving its added value to clinical judgement, and the user experience, unconfirmed. Although Hoti and colleagues did incorporate trained assessors into their study, the evaluation was conducted on an all-white dataset; this bias limits the generalisability of the findings until they are verified in a more heterogeneous dataset.

Global scale-up of facial analysis tools should factor in that positive results might drive demand for formal assessments or treatment; for those unable to access such specialist care, these tools could further widen health inequities. Consideration should also be given to the consequences of algorithmic errors, such as false negatives, and to the complexity of facial expressions (for instance, unless a source of pain is known or suspected, the facial expressions of infants in pain might not be easy to differentiate from those of infants in non-pain-related distress). There are also unresolved ethical questions surrounding these tools, such as those concerning data privacy, security, and consent.
The scientific community has called for researchers to evaluate the ethical ramifications of their work and how their tools and datasets could be misused. Likewise, academic publishers have been tasked with establishing independent ethics boards to consult on facial analysis studies. A further unaddressed issue is the lack of stringent regulation of facial analysis tools and the biometric data they generate, which is essential for public trust. Despite the potential gains, the field of automated facial analysis has several challenges that must be addressed. Tools should be transparently reported so that the public fully understands their limitations and is made aware of potentially negative consequences (particularly in resource-limited settings). Additionally, gaps for further work should be identified, such as the need for testing across different demographic groups and for increasing access to specialist services. Researchers and academic publishers should play an active role in assessing the ethical implications of facial analysis studies. Finally, governments and regulatory bodies must introduce strong legislation and guidance to protect human rights and ensure accountability.