Image Quality Factors Influencing Selfie Preference: The Role of Skin Color
ABSTRACT This study investigates how image quality attributes affect selfie evaluations. Selfies were captured with five smartphone camera models under three lighting conditions (5000, 4000, and 2500 K), and participants rated their preference for each selfie on a 7-point Likert scale. In addition to preference, participants rated 10 descriptive keywords related to image quality. Principal component analysis revealed that skin color, reflected in keywords related to hue and naturalness, was the most influential factor in selfie preference. A comparison with measured skin color datasets showed that smartphone cameras generally rendered skin brighter and more saturated than the actual measurements.
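The principal component analysis mentioned above can be sketched in a few lines. This is a minimal, illustrative PCA over a hypothetical raters x keywords matrix of 7-point scores; the keyword names and the data are invented for illustration and are not the study's.

```python
import numpy as np

def pca_loadings(ratings, n_components=2):
    """Top principal-component variances and loadings of a
    raters x keywords matrix of Likert scores."""
    X = ratings - ratings.mean(axis=0)      # centre each keyword column
    cov = np.cov(X, rowvar=False)           # keyword covariance matrix
    vals, vecs = np.linalg.eigh(cov)        # eigendecomposition, ascending order
    order = np.argsort(vals)[::-1]          # re-sort by explained variance
    return vals[order][:n_components], vecs[:, order][:, :n_components]

# Hypothetical 7-point scores: 6 raters x 3 keywords (hue, naturalness, contrast).
scores = np.array([[6, 6, 3], [5, 5, 4], [7, 6, 3],
                   [2, 3, 5], [3, 2, 6], [6, 7, 2]], dtype=float)
variances, loadings = pca_loadings(scores)
# A first component loading hue and naturalness together would mirror
# the skin-colour factor reported in the abstract.
```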
- Research Article
29
- 10.1016/j.juro.2015.07.080
- Jul 17, 2015
- Journal of Urology
Assessment of Prospectively Assigned Likert Scores for Targeted Magnetic Resonance Imaging-Transrectal Ultrasound Fusion Biopsies in Patients with Suspected Prostate Cancer
- Research Article
17
- 10.1016/j.ejrad.2023.110973
- Jul 11, 2023
- European Journal of Radiology
Purpose: To assess the impact of prostate MRI image quality, by means of the Prostate Imaging Quality (PI-QUAL) score, on the identification of extraprostatic extension of disease (EPE), predicted using the EPE Grade score, Likert Scale Score (LSS), and a clinical nomogram (MSKCCn). Methods: We retrospectively included 105 patients with multiparametric prostate MRI prior to prostatectomy. Two radiologists evaluated image quality in consensus using PI-QUAL (≥4 was considered high quality). All cases were also scored using the EPE Grade, the LSS, and the MSKCCn (dichotomized). Inter-rater reproducibility for each score was also assessed. Accuracy was calculated for the entire population and by image quality, considering two thresholds for EPE Grade (≥2 and =3) and LSS (≥3 and ≥4), using McNemar's test for comparison. Results: Overall, 66 scans achieved high quality. The accuracy of EPE Grade ranged from 0.695 to 0.743, while LSS achieved values between 0.705 and 0.733. Overall sensitivity for the radiological scores (range = 0.235–0.529) was low irrespective of the PI-QUAL score, while specificity was higher (0.775–0.986). The MSKCCn achieved an AUC of 0.76, outperforming EPE Grade (=3 threshold) in studies with suboptimal image quality (0.821 vs 0.564, p = 0.016). EPE Grade (=3 threshold) accuracy was also better in high-image-quality studies (0.849 vs 0.564, p = 0.001). Reproducibility was good to excellent overall (95% confidence interval range = 0.782–0.924). Conclusion: Assessing image quality by means of PI-QUAL is helpful in the evaluation of EPE, since the performance of the radiological scores drops on low-quality scans compared with clinical staging tools.
- Preprint Article
- 10.2337/figshare.25009076
- Feb 5, 2024
Objective: The objective of this study was to develop ANcam, a novel method for identifying acanthosis nigricans (AN) using a smartphone camera and computer-aided color analysis for noninvasive screening of people with impaired glucose tolerance (IGT). Research Design and Methods: Adult and juvenile participants with or without diagnosed type 2 diabetes were recruited in Trinidad and Tobago. After informed consent was obtained, participants' history, demographics, anthropometrics, and A1C were collected and recorded. Three subject-matter experts independently graded pictures of the posterior neck and upper back using the ANcam smartphone application and Burke methods. A correlation matrix investigated 25 color channels for association with hyperpigmentation, and diagnostic thresholds were determined with receiver operating characteristic curve analysis. Results: For the 227 participants with captured images and A1C values, the cyan/magenta/yellow/black (CMYK) model color channel CMYK_K was best correlated with IGT at an A1C cut-off of 5.7% [39 mmol/mol] (R = 0.45, P < 0.001). With high predictive accuracy (area under the curve = 0.854), a cut-off of 7.67 CMYK_K units was chosen, giving a sensitivity of 81.1% and a specificity of 70.3%. ANcam had low inter-rater variance (F = 1.99, P = 0.137) compared to Burke grading (F = 105.71, P < 0.001). ANcam detected hyperpigmentation on the neck at double the self-reported frequency. With AN present, elevated BMI was 2.9 (95% CI 1.9–4.3) times more likely, elevated blood pressure 1.7 (95% CI 1.2–2.4) times more likely, and greater waist-to-hip ratio 2.3 (95% CI 1.4–3.6) times more likely. Conclusion: ANcam offers a sensitive, reproducible, and user-friendly IGT screening tool for any smartphone user that performs well with most skin tones and lighting conditions.
Key Points:
- A computer-aided screening tool for acanthosis nigricans (AN) quantifies hyperpigmentation and directs users to medical interventions.
- Using a smartphone image, hyperpigmentation can be detected and associations with impaired glucose tolerance (IGT) can be made.
- Of 25 color channels investigated, the cyan/magenta/yellow/black (CMYK) model CMYK_K channel was best correlated with IGT. ANcam had high predictive accuracy, with both high sensitivity (81.1%) and high specificity (70.3%), detecting AN twice as often as self-observation.
- ANcam is a user-friendly tool that can be used with any skin tone and most lighting conditions, even by laypersons, to detect AN and screen for type 2 diabetes.
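The CMYK_K channel the abstract singles out is, in the standard RGB-to-CMYK conversion, the black (K) component. A minimal sketch of that conversion follows; the app's actual calibration, scaling, and patch averaging are not given in the abstract, and the example pixel values are invented.

```python
def cmyk_k(r, g, b):
    """Black (K) channel of the standard RGB -> CMYK conversion,
    scaled to 0-100. Darker, more hyperpigmented skin gives higher K."""
    return 100.0 * (1.0 - max(r, g, b) / 255.0)

# Hypothetical neck-patch pixels: a lighter and a darker sample.
light = cmyk_k(210, 180, 160)   # lower K
dark = cmyk_k(120, 90, 75)      # higher K
```

Note that how these raw K values map onto the paper's "7.67 CMYK_K units" threshold depends on the app's internal scaling, which the abstract does not describe.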
- Research Article
- 10.2352/issn.2169-2629.2019.27.17
- Oct 21, 2019
- Color and Imaging Conference
Because our sensitivity to human skin color leads to a precise chromatic adjustment, skin color has been considered a calibration target to enhance the quality of images that contain human faces. In this paper, we investigate the perceived quality of portrait images depending on how the target skin color is defined: measured, memory, digital, or CCT skin color variations. In a user study, 24 participants assessed the quality of white-balanced portraits on five criteria: reality, naturalness, appropriateness, preference, and emotional enhancement. The results showed that calibration using measured skin color best served reality and naturalness, digital skin color obtained the highest scores for appropriateness and preference, and memory skin color was appropriate for calibrating portraits with emotional enhancement. The two CCT target colors also enhanced the affective quality of portrait images, but the effect was quite marginal. This study accordingly proposes Skin Balance, a set of alternative skin color targets and a simple but efficient way of reproducing portrait images with affective enhancement.
- Research Article
5
- 10.55460/hi77-s19w
- Jan 1, 2015
- Journal of Special Operations Medicine
As US military combat operations draw down in Afghanistan, the military health system will shift focus to garrison- and hospital-based care. Maintaining combat medical skills while performing routine healthcare in military hospitals and clinics is a critical challenge for Combat medics. Current regulations allow a wide latitude of Combat medic functions. The Surgeon General considers combat casualty care a top priority. Combat medics are expected to provide sophisticated care under the extreme circumstances of a hostile battlefield. Yet, in the relatively safe and highly supervised setting of contiguous US-based military hospitals, medics are rarely allowed to perform the procedures or administer the medications they are expected to use in combat. This study sought to determine patients' opinions on the use of Combat medics in their healthcare. Patients in hospital emergency departments (EDs) were offered anonymous surveys. Examples of Combat medic skills were provided. Participants expressed agreement using the Likert scale (LS), with scores ranging from "strongly agree" (LS score, 1) to "strongly disagree" (LS score, 5). The study took place in the ED at Bayne-Jones Army Community Hospital, Fort Polk, Louisiana. Surveys were offered to adult patients when they checked into the ED or to adults accompanying other patients. A total of 280 surveys were completed and available for analysis. Subjects agreed that Combat medic skills are important for deployment (LS score, 1.4) and that Combat medics should be allowed to perform procedures (LS score, 1.6) and administer medications (LS score, 1.6). Subjects would allow Combat medics to perform procedures (LS score, 1.7) and administer medications (LS score, 1.7) for them or their families. Subjects agreed that Combat medic activities should be a core mission for military treatment facilities (MTFs) (LS score, 1.6). Patients support the use of Combat medics during clinical care.
Patients agree that Combat medic use should be a core mission for MTFs. Further research is needed to optimize Combat medic integration into patient healthcare.
- Research Article
4
- 10.1007/s11063-012-9275-4
- Dec 29, 2012
- Neural Processing Letters
Fixed thresholds, which search only a certain range of skin color, typically fail in two circumstances: (i) a skin-like object is classified as skin if its colors fall within the fixed threshold range; (ii) true skin of a different ethnicity is mistakenly classified as non-skin if its colors fall outside that range. In this paper, graph cuts (GC) are first extended to skin color segmentation. Although the results are acceptable, a complex environment with skin-like objects, different skin colors, or different lighting conditions often yields only partial success. A probabilistic neural network (PNN), by contrast, is known to recognize different skin colors in cluttered environments. Therefore, many images with skin-like objects, different skin colors, or different lighting conditions are segmented by the proposed algorithm (a combination of the GC algorithm and PNN classification with other functions, e.g., morphological filtering, labeling, and an area constraint). Results are compared among the GC algorithm, PNN classification, and the proposed algorithm, not only to verify accurate segmentation of these images but also to show reduced computation time. Finally, an application to hand-gesture classification in complex environments with different lighting conditions further confirms the effectiveness and efficiency of our method.
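Both failure modes of fixed thresholds can be reproduced with a classic fixed CbCr box. The box below uses commonly cited illustrative thresholds, not this paper's values, and the example pixels are invented.

```python
def rgb_to_cbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> (Cb, Cr)."""
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return cb, cr

def is_skin_fixed(cb, cr):
    """Fixed CbCr box in the spirit of classic skin detectors;
    thresholds are illustrative, not this paper's."""
    return 77 <= cb <= 127 and 133 <= cr <= 173

# Typical skin under daylight falls inside the box...
skin_daylight = is_skin_fixed(*rgb_to_cbcr(200, 150, 120))    # detected
# ...but the same skin under strongly bluish light falls outside it,
# illustrating failure mode (ii); a skin-coloured object with colours
# inside the box would illustrate failure mode (i).
skin_blue_light = is_skin_fixed(*rgb_to_cbcr(120, 110, 140))  # missed
```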
- Preprint Article
- 10.1101/2024.08.07.24311623
- Aug 8, 2024
- medRxiv : the preprint server for health sciences
Although hypothesized to be the root cause of the pulse oximetry disparities, skin tone and its use for improving medical therapies have yet to be extensively studied. Previous studies used self-reported race as a proxy variable for skin tone. However, this approach cannot account for skin tone variability within race groups and risks confounding by other non-biological factors when modeling data. Therefore, to better evaluate health disparities associated with pulse oximetry, this study aimed to create a unique baseline dataset that included skin tone and electronic health record (EHR) data. Patients admitted to Duke University Hospital were eligible if they had at least one pulse oximetry value recorded within 5 minutes before an arterial blood gas (ABG) value. We collected skin tone data at 16 different body locations using multiple devices, including administered visual scales, colorimetric and spectrophotometric instruments, and photography via mobile phone cameras. All patients' data were linked in Duke's Protected Analytics Computational Environment (PACE), converted into a common data model, and then de-identified before publication in PhysioNet. Skin tone data were collected from 128 patients. We assessed 167 features per skin location on each patient. We also collected over 2000 images from mobile phones measured in the same controlled environment. Skin tone data are linked with patients' EHR data, such as laboratory data, vital sign recordings, and demographic information. Measuring different aspects of skin tone at each of the sixteen body locations and linking them with patients' EHR data could assist in the development of a more equitable AI model to combat disparities in healthcare associated with skin tone. A common data model format enables easy data federation with similar data from other sources, facilitating multicenter research on skin tone in healthcare.
A prospectively collected EHR-linked skin tone measurements database in a common data model with emphasis on pulse oximetry disparities.
- Research Article
7
- 10.2196/34934
- Apr 22, 2022
- JMIR biomedical engineering
Background: Many commodity pulse oximeters are insufficiently calibrated for patients with darker skin. We demonstrate a quantitative measurement of this disparity in peripheral blood oxygen saturation (SpO2) with a controlled experiment. To mitigate this, we present OptoBeat, an ultra-low-cost smartphone-based optical sensing system that captures SpO2 and heart rate while calibrating for differences in skin tone. Our sensing system can be constructed from commodity components and 3D-printed clips for approximately US $1. In our experiments, we demonstrate the efficacy of the OptoBeat system, which can measure SpO2 within 1% of the ground truth at levels as low as 75%. Objective: The objective of this work is to test the following hypotheses and implement an ultra-low-cost smartphone adapter to measure SpO2: skin tone has a significant effect on pulse oximeter measurements (hypothesis 1), images of skin tone can be used to calibrate pulse oximeter error (hypothesis 2), and SpO2 can be measured with a smartphone camera using the screen as a light source (hypothesis 3). Methods: Synthetic skin with the same optical properties as human skin was used in ex vivo experiments. A skin tone scale was placed in images for calibration and ground truth. To achieve a wide range of SpO2 for measurement, we reoxygenated sheep blood and pumped it through synthetic arteries. A custom optical system was connected from the smartphone screen (flashing red and blue) to the analyte and into the phone's camera for measurement. Results: The 3 skin tones were accurately classified according to the Fitzpatrick scale as types 2, 3, and 5. Classification was performed using the Euclidean distance between the measured red, green, and blue values.
Traditional pulse oximeter measurements (n=2000) showed significant differences between skin tones in both alternating current and direct current measurements using ANOVA (direct current: F2,5997=3.1170 × 10⁵, P<.01; alternating current: F2,5997=8.07 × 10⁶, P<.01). Continuous SpO2 measurements (n=400; 10-second samples, 67 minutes total) from 95% to 75% were captured using OptoBeat in an ex vivo experiment. The accuracy was measured to be within 1% of the ground truth via quadratic support vector machine regression and 10-fold cross-validation (R2=0.97, root mean square error=0.7, mean square error=0.49, and mean absolute error=0.5). In the human-participant proof-of-concept experiment (N=3; samples=3 × N, duration=20-30 seconds per sample), SpO2 measurements were accurate to within 0.5% of the ground truth, and pulse rate measurements were accurate to within 1.7% of the ground truth. Conclusions: In this work, we demonstrate that skin tone has a significant effect on SpO2 measurements, and we present the design and evaluation of OptoBeat. The ultra-low-cost OptoBeat system enables smartphones to classify skin tone for calibration, reliably measure SpO2 as low as 75%, and normalize to avoid skin tone-based bias.
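The nearest-swatch classification described in the Results (Euclidean distance between measured red, green, and blue values) can be sketched as follows. The reference RGB swatches are hypothetical placeholders, since the paper's actual calibration values are not given in the abstract.

```python
import math

# Hypothetical reference RGB swatches for Fitzpatrick types 2, 3, and 5;
# the study's actual calibration swatches are not published in the abstract.
REFERENCES = {2: (230, 195, 170), 3: (200, 160, 130), 5: (130, 90, 65)}

def classify_skin_tone(rgb):
    """Assign the Fitzpatrick type of the nearest reference swatch,
    by Euclidean distance in RGB, as the abstract describes."""
    return min(REFERENCES, key=lambda t: math.dist(rgb, REFERENCES[t]))

tone = classify_skin_tone((205, 165, 135))   # nearest to the type-3 swatch
```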
- Conference Article
4
- 10.1117/12.587934
- Mar 14, 2005
The accuracy of skin segmentation algorithms is highly sensitive to changes in lighting conditions: when the lighting in a scene differs from that in the training examples, the misclassification rate is high. Using a color constancy approach, we aim to compensate for skin color variations and achieve accurate skin color segmentation. Skin color constancy is realized in an unsupervised manner by using the color changes observed on a face under different illuminations to drive the model. By training on a few faces of different ethnicities, our model is able to generalize the color mapping to any unseen ethnicity. The observed color changes are used to learn the color mapping from one lighting condition to another. These mappings are represented in a low-dimensional subspace to obtain basis vector fields, with which we can model the nonlinear color changes that transform skin colors under arbitrary lighting conditions to a reference lighting condition. We show a proof of concept of unsupervised skin color constancy on faces from the PIE database. Skin segmentation is performed on the color-compensated faces using a Skin Distribution Map (SDM) trained on skin colors in the reference lighting condition.
- Research Article
18
- 10.1109/tcsvt.2009.2026967
- Dec 1, 2009
- IEEE Transactions on Circuits and Systems for Video Technology
In consumer video conferencing, lighting conditions are usually not ideal, so image quality is poor. Lighting affects image quality in two respects: brightness and skin tone. While there has been much research on improving the brightness of captured images, including contrast enhancement and noise removal (both of which can be thought of as components of brightness improvement), little attention has been paid to skin tone. By contrast, it is common knowledge among professional stage lighting designers that lighting affects not only brightness but also color tone, which plays a critical role in the perceived look of the host and the mood of the stage scene. Inspired by stage lighting design, we propose an active lighting system that automatically adjusts the lighting so that the image looks visually appealing. The system consists of computer-controllable light-emitting-diode light sources of different colors, so it improves not only the brightness but also the skin tone of the face. Given that there is no quantitative formula for what makes a good skin tone, we use a data-driven approach to learn a good skin tone model from a collection of photographs taken by professional photographers. We have developed a working system and conducted user studies to validate our approach.
- Research Article
1
- 10.3390/jimaging10050109
- Apr 30, 2024
- Journal of Imaging
Knowledge of a person's level of skin pigmentation, or so-called "skin tone", has proven to be an important building block in improving the performance and fairness of various applications that rely on computer vision. These include medical diagnosis of skin conditions, cosmetic and skincare support, and face recognition, especially for darker skin tones. However, the perception of skin tone, whether by the human eye or by an optoelectronic sensor, relies on the reflection of light from the skin, so the source of that light, the illumination, affects the skin tone that is perceived. This study aims to refine and assess a convolutional neural network-based skin tone estimation model that provides consistent accuracy across different skin tones under various lighting conditions. The 10-point Monk Skin Tone Scale was used to represent the skin tone spectrum. A dataset of 21,375 images was captured from volunteers across the pigmentation spectrum. Experimental results show that a regression model outperforms other models, with an estimated-to-target distance of 0.5. Using a threshold estimated-to-target skin tone distance of 2 for all lights results in average accuracy values of 85.45% and 97.16%. With the Monk Skin Tone Scale segmented into three groups, the lighter group exhibits strong accuracy, the middle group lower accuracy, and the dark group falls between the two. The overall skin tone estimation achieves an average error distance in the LAB space of 16.40±20.62.
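The LAB-space error distance reported at the end is presumably the plain Euclidean distance (CIE76 Delta E); the abstract does not name the exact formula. A minimal sketch with invented estimated and measured values:

```python
import math

def lab_distance(lab1, lab2):
    """Euclidean (CIE76 Delta E) distance between two CIELAB colours,
    a common choice for the error metric the abstract reports."""
    return math.dist(lab1, lab2)

# Hypothetical estimated vs. measured skin colours in LAB (L*, a*, b*).
err = lab_distance((55.0, 12.0, 18.0), (60.0, 10.0, 14.0))
```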
- Research Article
1
- 10.6109/jkiice.2010.14.8.1809
- Aug 31, 2010
- The Journal of the Korean Institute of Information and Communication Engineering
To detect skin color areas in input images, many prior studies have divided an image into pixels having skin color and all other pixels. In still images or video, it is very difficult to extract skin pixels exactly, because lighting conditions and makeup generate many variations of skin color. In this paper, we propose a method that improves performance by hierarchically merging a 3D skin color model with context information, even for images presenting these difficulties. We first build a 3D color histogram distribution from the skin color pixels of many YCbCr color images and then divide the color space into three layers: a skin color region (Skin), a non-skin color region (Non-skin), and a skin color candidate region (Skinness). When segmenting the skin color region of an image, skin color pixels and non-skin color pixels are assigned to the skin region and non-skin region, respectively. If a pixel belongs to the Skinness region, it is assigned to the skin or non-skin region according to the context information of its neighbors. The proposed method can efficiently segment skin color regions from images containing heavily distorted or confusingly similar skin colors.
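The three-layer decision rule can be sketched per pixel as follows. The majority-vote context rule is an assumption made for illustration, as the abstract does not specify how neighbor information is combined.

```python
def label_pixel(layer, neighbour_layers):
    """Decide skin/non-skin for one pixel, given its colour-space layer
    ('skin', 'non-skin', or 'skinness') and its neighbours' layers.
    A sketch of the hierarchical rule: definite layers decide directly,
    and Skinness candidates fall back on an assumed majority vote."""
    if layer == 'skin':
        return True
    if layer == 'non-skin':
        return False
    # Skinness candidate: use neighbourhood context.
    skin_votes = sum(n == 'skin' for n in neighbour_layers)
    return skin_votes > len(neighbour_layers) / 2

# A candidate pixel surrounded mostly by skin is kept as skin.
decided = label_pixel('skinness', ['skin', 'skin', 'skin', 'non-skin'])
```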
- Research Article
- 10.30153/jcagst.201003.0016
- Mar 1, 2010
Taking photos has become part of everyday life, as mobile phone cameras are widespread and SLR cameras are much cheaper than before. However, most mobile phone cameras do not have good auto exposure and white balancing algorithms, which results in over- or under-exposure, a pale sky, incorrect skin color, and no detail in shadow. To improve the color accuracy of such photos, this study proposes a model that processes poor photos to make them look better. It corrects color based on three important components of a typical outdoor photograph. First, the sky region is detected using a modified region-growing technique. Second, the ground colors are corrected by auto white balancing. The face in the photo is then detected for skin color correction. The three color-corrected images are mixed based on the sky area mask and the similarity to skin color. Normal photos were randomly perturbed to serve as uncorrected photos, and their color became more acceptable after applying the region-based color correction model.
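The "auto white balancing" step for ground colors could, for example, use the simple grey-world assumption; the paper's exact method is not specified, so this is a generic sketch on a synthetic image patch.

```python
import numpy as np

def gray_world_awb(img):
    """Grey-world auto white balance: scale each channel so its mean
    matches the overall mean, neutralising a global colour cast.
    One plausible choice for the abstract's white balancing step."""
    img = img.astype(float)
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel means (R, G, B)
    gains = means.mean() / means              # gains that equalise the means
    return np.clip(img * gains, 0, 255)

# A uniformly warm-cast patch comes out neutral grey.
patch = np.full((2, 2, 3), [200.0, 150.0, 100.0])
balanced = gray_world_awb(patch)
```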
- Discussion
9
- 10.1016/j.jaad.2021.09.041
- Oct 4, 2021
- Journal of the American Academy of Dermatology
Skin of color representation in medical education: An analysis of National Board of Medical Examiners' self-assessments and popular question banks