Automated Residual Plot Assessment With the R Package autovi and the Shiny Application autovi.web
Visual assessment of residual plots is a common approach for diagnosing linear models, but it relies on manual evaluation, which does not scale well and can lead to inconsistent decisions across analysts. The lineup protocol, which embeds the observed plot among null plots, can reduce subjectivity but requires even more human effort. In today's data‐driven world, such tasks are well suited for automation. We present a new R package that uses a computer vision model to automate the evaluation of residual plots. An accompanying Shiny application is provided for ease of use. Given a sample of residuals, the model predicts a visual signal strength (VSS) and offers supporting information to help analysts assess model fit.
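To make the lineup protocol concrete, here is a minimal Python sketch of the idea (autovi itself is an R package; nothing below reflects its actual API): the observed residual plot is hidden among null plots whose residuals are simulated under the fitted model, and a viewer who can pick it out has evidence of model misfit.

```python
# A generic sketch of the lineup protocol, assuming a simple linear model;
# this is illustrative only and is not autovi's interface.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Fit a linear model and collect its residuals.
x = rng.uniform(0, 10, 100)
y = 2.0 + 0.5 * x + rng.normal(0, 1, 100)
beta = np.polyfit(x, y, 1)
fitted = np.polyval(beta, x)
observed_resid = y - fitted

# Build a 20-panel lineup: 19 null plots with residuals simulated from the
# fitted model, plus the observed plot hidden at a random position.
n_panels = 20
true_pos = rng.integers(n_panels)
fig, axes = plt.subplots(4, 5, figsize=(12, 8), sharex=True, sharey=True)
for i, ax in enumerate(axes.flat):
    if i == true_pos:
        resid = observed_resid
    else:
        # Null residuals: regenerate y under the fitted model, then refit.
        y_null = fitted + rng.normal(0, observed_resid.std(), len(x))
        resid = y_null - np.polyval(np.polyfit(x, y_null, 1), x)
    ax.scatter(fitted, resid, s=8)
    ax.axhline(0, lw=0.5)
    ax.set_title(str(i + 1), fontsize=8)
fig.suptitle("Which residual plot looks different?")
plt.show()
```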
- Research Article
- 10.1177/1558944717730604
- Sep 16, 2017
- HAND
Measurement of wrist range of motion (ROM) is important to all aspects of treatment and rehabilitation of upper extremity conditions. Recently, gyroscopes have been used to measure ROM and may be more precise than manual evaluations. The purpose of this study was to evaluate the use of the iPhone gyroscope application and compare it with use of a goniometer, specifically evaluating its accuracy and ease of use. A cross-sectional study evaluated adult Caucasian participants with no evidence of wrist pathology. Wrist ROM measurements in 306 wrists using the 2 methods were compared. Demographic information was collected, including age, sex, and occupation. Analysis included mixed models and Bland-Altman plots. Wrist motion was similar between the 2 methods. Technical difficulties were encountered with gyroscope use. Age was an independent predictor of ROM. Correct measurement of ROM is critical to guide, compare, and evaluate treatment and rehabilitation of the upper extremity. Inaccurate measurements could mislead the surgeon and harm patient adherence to therapy or surgeon instruction. An application used by the patient could improve adherence but needs to be reliable and easy to use. Evaluation is necessary before utilization of such an application. This study supports revision of the application on the iPhone to improve ease of use.
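For readers unfamiliar with Bland-Altman analysis, it reduces to plotting paired differences against paired means with 95% limits of agreement. A minimal sketch with hypothetical paired ROM measurements (not the study's data):

```python
# Bland-Altman sketch: bias and limits of agreement for two measurement
# methods. The paired values here are simulated placeholders.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
goniometer = rng.normal(60, 10, 306)            # hypothetical wrist ROM (degrees)
gyroscope = goniometer + rng.normal(0, 3, 306)  # hypothetical second method

mean_pair = (goniometer + gyroscope) / 2
diff = gyroscope - goniometer
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)  # 95% limits of agreement

plt.scatter(mean_pair, diff, s=8)
plt.axhline(bias, label=f"bias = {bias:.2f}")
plt.axhline(bias + loa, linestyle="--", label="95% limits of agreement")
plt.axhline(bias - loa, linestyle="--")
plt.xlabel("Mean of the two methods (degrees)")
plt.ylabel("Gyroscope - goniometer (degrees)")
plt.legend()
plt.show()
```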
- Research Article
- 10.1097/hmr.0000000000000222
- Oct 8, 2018
- Health Care Management Review
Electronic health records (EHRs) have potential to improve quality, health outcomes, and efficiency, but little is known about the mechanisms through which these improvements occur. One potential mechanism could be that EHRs improve care team communication and coordination, leading to better outcomes. To test this hypothesis, we examine whether ease of EHR use is associated with better relational coordination (RC), a measure of team communication and coordination, among primary care team members. Surveys of adult primary care team members (n = 304) at 16 practices of two accountable care organizations in Chicago and Los Angeles were analyzed. The survey included a validated measure of RC and a measure of ease of EHR use from a national survey. Linear regression models estimated the association of ease of EHR use and RC, controlling for care site and patient demographics and accounting for cluster-robust standard errors. An interaction term tested a differential association of ease of EHR use and RC for primary care providers (PCPs) versus non-PCPs. Ease of EHR use (mean = 3.5, SD = 0.6, range: 0-4) and RC (mean = 4.0, SD = 0.7, range: 0-5) were both high but differed by occupation. In regression analyses, a 1-point increase in ease of EHR use was associated with a 0.36-point higher RC score (p = .001). The association of ease of EHR use and RC was stronger for non-PCPs than for PCPs. Ease of EHR use is associated with better RC among primary care team members, and the benefits accrue more to non-PCPs than to PCPs. Ensuring that clinicians and staff experience EHRs as easy to use for accessing and integrating data and for communication may produce gains in efficiency and outcomes through high RC. Future studies should examine whether interventions to improve EHR usability can lead to improved RC and patient outcomes.
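The modeling step described above (OLS with an interaction term and cluster-robust standard errors) can be sketched in a few lines; the variable names and simulated data below are hypothetical stand-ins, not the study's dataset:

```python
# Sketch of RC regressed on ease of EHR use with a PCP interaction and
# cluster-robust standard errors by practice site, using statsmodels.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 304
df = pd.DataFrame({
    "ease_ehr": rng.uniform(0, 4, n),
    "is_pcp": rng.integers(0, 2, n),
    "site": rng.integers(0, 16, n),  # 16 practice sites
})
# Simulated outcome loosely echoing the reported 0.36 slope.
df["rc"] = (2.0 + 0.36 * df["ease_ehr"] - 0.1 * df["is_pcp"]
            - 0.1 * df["ease_ehr"] * df["is_pcp"] + rng.normal(0, 0.7, n))

# ease_ehr * is_pcp expands to main effects plus interaction; C(site)
# controls for care site, and cov_type="cluster" clusters SEs by site.
model = smf.ols("rc ~ ease_ehr * is_pcp + C(site)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["site"]})
print(model.summary())
```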
- Dissertation
- 10.25534/tuprints-00013245
- Aug 21, 2020
In recent years, learning-based methods have become the dominant approach to solving computer vision tasks. A major reason for this development is their automatic adaptation to the particularities of the task at hand by learning a model of the problem from (training) data. This approach assumes that the training data closely resemble the data encountered during testing. Successfully applying a learning-based algorithm in a wide range of real-world scenarios thus requires collecting a large set of training data, which models the complex phenomena encountered in the real world and covers rare but critical edge cases. For many tasks in computer vision, however, the human effort required severely limits the scale and diversity of datasets. A promising approach to reducing the human effort involved is data synthesis, by which considerable parts of the collection and annotation process can be automated. Employing synthetic data, however, poses unique challenges: first, synthesis is only as useful as the methods' ability to capitalize on virtually infinite amounts of data and arbitrary precision. Second, synthetic data must be sufficiently realistic to be useful in real-world scenarios, yet modeling real-world phenomena within the synthesis can be even more laborious than collecting and annotating real datasets in the first place. In this dissertation, we address these challenges in two ways: first, we propose to adapt data-driven methods to take advantage of the unique features of synthetic data. Specifically, we develop a method that reconstructs the surface of objects from a single view under uncalibrated illumination conditions. The method estimates illumination conditions and synthesizes suitable training data at test time, enabling reconstructions at unprecedented detail. Furthermore, we develop a memory-efficient approach for the reconstruction of complete 3D shapes from a single view. This way, we leverage the high precision available through 3D CAD models and obtain more accurate and detailed reconstructions than previous approaches. Second, we propose to tap into computer games for creating ground truth for a variety of visual perception tasks. Open-world computer games mimic the real world and feature large diversity paired with high realism. Since source code is not available for commercial games, we devise a technique to intercept the rendering pipeline during game play and use the rendering resources to identify objects in rendered images. As only limited semantic information is available at the level of interception and manual association of resources with semantic classes is still necessary, we develop a method that dramatically speeds up annotation by recognizing shared resources and automatically propagating annotations across the dataset. Leveraging the geometric information available through the rendering process, we further collect ground truth for optical flow, visual odometry, and 3D scene layout. The synthesis of data from computer games reduces the human annotation effort significantly and allows creating synthetic datasets that model the real world at unprecedented scale. The ground truth for multiple visual perception tasks enables deeper analysis of current methods and the development of novel approaches that reason about multiple tasks holistically. For both the adaptation of data-driven methods and the datasets derived from computer games, we demonstrate significant performance improvements through quantitative and qualitative evaluations.
- Research Article
- 10.1093/humrep/deab130.010
- Aug 6, 2021
- Human Reproduction
Study question Can the LensHooke X1 PRO semen analyzer be used to evaluate sperm morphology in men with infertility? Summary answer Morphology results generated by the X1 PRO are highly reliable when normal sperm forms are ≥4%, and they can therefore be reported in such cases. What is known already Most laboratories rely on manual evaluation of sperm morphology smears, a time-consuming procedure whose results are subject to relatively high variability. In recent years, however, computer-assisted semen analyzers have been increasingly used to evaluate sperm morphology. The X1 PRO semen quality analyzer was designed for in vitro diagnostic use to analyze sperm concentration; total, progressive, and non-progressive motility; and sperm morphology based on WHO 5th edition criteria. Evaluation of sperm morphology using the X1 PRO, based on AIOM (Artificial Intelligence Optical Microscopic) technology, requires no fixation or staining steps, unlike the manual method. Study design, size, duration This cross-sectional study used 31 semen samples from 8 normozoospermic healthy volunteers and 5 infertile men with a minimum abstinence period of 2-3 days. While the 8 healthy semen donors produced a total of 26 ejaculates, which were split into 88 aliquots, the 5 infertile patients produced 5 ejaculates that were split into 13 aliquots. Participants/materials, setting, methods A total of 101 aliquots were prepared from the native semen samples, either by dilution or by concentration using seminal plasma of the respective donors. Automated semen analysis was performed with the X1 PRO semen analyzer, and the sperm morphology results were compared with manual morphology results obtained using Diff-Quik staining. Statistical analysis was carried out to calculate the positive predictive value (PPV) and negative predictive value (NPV) of the X1 PRO semen analyzer. Main results and the role of chance The X1 PRO sperm morphology results show a weak, non-significant correlation (r = 0.119, P = 0.2441) with the manual results. However, the X1 PRO demonstrated a high PPV (97.7%) and a low NPV (9.1%) for correct assessment of sperm morphology (≥4%) when compared with manual results. Given its high PPV, laboratories can report the morphology results generated by the X1 PRO in all cases where normal sperm forms are ≥4%. However, manual evaluation remains necessary in patients with abnormal morphology (<4%). Limitations, reasons for caution One limitation of this study is that X1 PRO morphology values did not correlate with manual results. The low NPV seen in our study is due to the inclusion of very few samples with abnormal sperm forms (<4%) in the analysis. Wider implications of the findings The X1 PRO's combination of speed, ease of use, accuracy, and portability makes it a good choice of device for settings from small medical offices to large IVF centers. The high PPV of the X1 PRO allows it to correctly identify normal sperm forms for diagnostic use. Trial registration number 18–771
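PPV and NPV follow directly from a 2x2 confusion matrix against the manual reference at the ≥4% normal-forms threshold. A minimal sketch with hypothetical counts chosen only to roughly reproduce the reported values:

```python
# PPV/NPV from confusion-matrix counts; the counts below are illustrative,
# not the study's actual tallies.
def ppv_npv(tp: int, fp: int, tn: int, fn: int) -> tuple[float, float]:
    """Positive and negative predictive values."""
    ppv = tp / (tp + fp)  # P(reference normal | analyzer says normal)
    npv = tn / (tn + fn)  # P(reference abnormal | analyzer says abnormal)
    return ppv, npv

tp, fp, tn, fn = 86, 2, 1, 10  # hypothetical counts
ppv, npv = ppv_npv(tp, fp, tn, fn)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")  # PPV = 97.7%, NPV = 9.1%
```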
- Research Article
- 10.1117/12.3006471
- Apr 2, 2024
- Proceedings of SPIE--the International Society for Optical Engineering
Medical image auto-segmentation techniques are basic and critical for numerous image-based analysis applications that play an important role in developing advanced and personalized medicine. Compared with manual segmentation, auto-segmentation is expected to contribute to a more efficient clinical routine and workflow by requiring fewer human interventions or revisions to auto-segmentations. However, current auto-segmentation methods are usually developed with the help of some popular segmentation metrics that do not directly consider human correction behavior. The Dice Coefficient (DC) focuses on the truly segmented areas, while the Hausdorff Distance (HD) only measures the maximal distance between the auto-segmentation boundary and the ground truth boundary. Boundary length-based metrics such as surface DC (surDC) and Added Path Length (APL) try to distinguish correctly predicted boundary pixels from wrong ones. It is uncertain whether these metrics can reliably indicate the required manual mending effort for application in segmentation research. Therefore, in this paper, the potential of the above four metrics, as well as a novel metric called the Mendability Index (MI), to predict human correction effort is studied with linear and support vector regression models. A total of 265 3D computed tomography (CT) samples for 3 objects of interest from 3 institutions, with corresponding auto-segmentations and ground truth segmentations, are utilized to train and test the prediction models. Five-fold cross-validation experiments demonstrate that meaningful human effort prediction can be achieved using segmentation metrics, with varying prediction errors for different objects. The improved variant of MI, called MIhd, generally shows the best prediction performance, suggesting its potential to reliably indicate the clinical value of auto-segmentations.
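For concreteness, the two most familiar of these metrics, DC and HD, can be computed for binary masks as follows; this is a generic sketch, not the paper's implementation, and surDC, APL, and MI are omitted:

```python
# Dice Coefficient and (symmetric) Hausdorff Distance for 2D boolean masks.
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import cdist

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice Coefficient: 2|A ∩ B| / (|A| + |B|)."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Maximal boundary-to-boundary distance, taken in both directions."""
    pa = np.argwhere(a & ~binary_erosion(a))  # boundary pixels of A
    pb = np.argwhere(b & ~binary_erosion(b))  # boundary pixels of B
    d = cdist(pa, pb)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

auto = np.zeros((64, 64), bool); auto[10:40, 10:40] = True
truth = np.zeros((64, 64), bool); truth[12:42, 12:42] = True
print(f"DC = {dice(auto, truth):.3f}, HD = {hausdorff(auto, truth):.2f} px")
```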
- Book Chapter
- 10.1007/978-3-030-30793-6_29
- Jan 1, 2019
Knowledge Graphs are used in an increasing number of applications. Although considerable human effort has been invested in making knowledge graphs available in multiple languages, most knowledge graphs are in English. Additionally, regional facts are often only available in the language of the corresponding region. This lack of multilingual knowledge availability clearly limits the porting of machine learning models to different languages. In this paper, we aim to alleviate this drawback by proposing THOTH, an approach for translating and enriching knowledge graphs. THOTH extracts bilingual alignments between a source and a target knowledge graph and learns how to translate from one to the other by relying on two different recurrent neural network models along with knowledge graph embeddings. We evaluated THOTH extrinsically by comparing the German DBpedia with the German translation of the English DBpedia on two tasks: fact checking and entity linking. In addition, we ran a manual intrinsic evaluation of the translation. Our results show that THOTH is a promising approach that achieves a translation accuracy of 88.56%. Moreover, its enrichment improves the quality of the German DBpedia significantly: we report +18.4% accuracy for fact validation and +19% F1 for entity linking.
- Dissertation
- 10.18174/511122
- Oct 6, 2021
Interactive machine vision for wildlife conservation
- Research Article
- 10.1109/tpami.2003.1206508
- Jul 1, 2003
- IEEE Transactions on Pattern Analysis and Machine Intelligence
THE last 10 years have witnessed rapid growth in the popularity of graphical models, most notably Bayesian networks, as a tool for representing, learning, and computing complex probability distributions. Graphical models provide an explicit representation of the statistical dependencies between the components of a complex probability model, effectively marrying probability theory and graph theory. As Jordan puts it in [2], graphical models are “a natural tool for dealing with two problems that occur throughout applied mathematics and engineering—uncertainty and complexity—and, in particular, they are playing an increasingly important role in the design and analysis of machine learning algorithms.” Graphical models provide powerful computational support for the Bayesian approach to computer vision, which has become a standard framework for addressing vision problems. Many familiar tools from the vision literature, such as Markov random fields, hidden Markov models, and the Kalman filter, are instances of graphical models. More importantly, the graphical models formalism makes it possible to generalize these tools and develop novel statistical representations and associated algorithms for inference and learning. The history of graphical models in computer vision follows closely that of graphical models in general. Research by Pearl [3] and Lauritzen [4] in the late 1980s played a seminal role in introducing this formalism to areas of AI and statistical learning. Not long after, the formalism spread to fields such as statistics, systems engineering, information theory, pattern recognition, and, among others, computer vision. One of the earliest occurrences of graphical models in the vision literature was a paper by Binford et al. [1]. The paper described the use of Bayesian inference in a hierarchical probability model to match 3D object models to groupings of curves in a single image. The following year marked the publication of Pearl’s influential book [3] on graphical models. Since then, many technical papers have been published in IEEE journals and conference proceedings that address different aspects and applications of graphical models in computer vision. Our goal in organizing this special section was to demonstrate the breadth of applicability of the graphical models formalism to vision problems. Our call for papers in February 2002 produced 16 submissions. After a careful review process, we selected six papers for publication, including five regular papers, and one short paper. These papers reflect the state-of-the-art in the use of graphical models in vision problems that range from low-level image understanding to high-level scene interpretation. We believe these papers will appeal both to vision researchers who are actively engaged in the use of graphical models and machine learning researchers looking for a challenging application domain. The first paper in this section is “Stereo Matching Using Belief Propagation” by J. Sun, N.-N. Zheng, and H.-Y. Shum. The authors describe a new stereo algorithm based on loopy belief propagation, a powerful inference technique for complex graphical models in which exact inference is intractable. They formulate the dense stereo matching problem as MAP estimation on coupled Markov random fields and obtain promising results on standard test data sets. One of the benefits of this formulation, as the authors demonstrate, is the ease with which it can be extended to handle multiview stereo matching. 
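To illustrate the message-passing machinery behind this MAP-on-MRF formulation, here is a toy min-sum belief propagation sketch on a single scanline; on a chain the computation is exact, while dense stereo iterates the same update on a loopy 2D grid. This is a didactic sketch, not the authors' algorithm:

```python
# Min-sum belief propagation on a 1D chain of pixels with a truncated-linear
# smoothness cost; disparities and matching costs are synthetic.
import numpy as np

def bp_scanline(data_cost: np.ndarray, lam: float = 1.0, trunc: float = 2.0):
    """data_cost: (n_pixels, n_disp) unary costs; returns MAP disparities."""
    n, d = data_cost.shape
    disp = np.arange(d)
    # Pairwise cost V(d1, d2) = lam * min(|d1 - d2|, trunc).
    V = lam * np.minimum(np.abs(disp[:, None] - disp[None, :]), trunc)

    fwd = np.zeros((n, d))  # message into pixel i from its left neighbor
    bwd = np.zeros((n, d))  # message into pixel i from its right neighbor
    for i in range(1, n):
        fwd[i] = ((data_cost[i - 1] + fwd[i - 1])[:, None] + V).min(axis=0)
    for i in range(n - 2, -1, -1):
        bwd[i] = ((data_cost[i + 1] + bwd[i + 1])[:, None] + V).min(axis=0)
    beliefs = data_cost + fwd + bwd  # min-sum beliefs per pixel
    return beliefs.argmin(axis=1)

rng = np.random.default_rng(3)
true_disp = np.repeat([2, 5], 10)                  # piecewise-constant scene
costs = (np.arange(8)[None, :] - true_disp[:, None]) ** 2 \
    + rng.normal(0, 0.5, (20, 8))                  # noisy matching costs
print(bp_scanline(costs))  # recovers roughly [2]*10 + [5]*10
```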
In their paper “Statistical Cue Integration of DAG Deformable Models,” S.K. Goldenstein, C. Vogler, and D. Metaxas describe a scheme for combining different sources of information into estimates of the parameters of a deformable model. They use a DAG representation of the interdependencies between the nodes in a deformable model. This framework supports the efficient integration of information from edges and other cues using the machinery of affine arithmetic and the propagation of uncertainties. They present experimental results for a face tracking application. Y. Song, L. Goncalves, and P. Perona describe, in their paper “Unsupervised Learning of Human Motion,” a method for learning probabilistic models of human motion from video sequences in cluttered scenes. Two key advantages of their method are its unsupervised nature, which can mitigate the need for tedious hand labeling of data, and the utilization of graphical model constraints to reduce the search space when fitting a human figure model.
- Conference Article
- 10.1109/icscet.2018.8537299
- Jan 1, 2018
Vehicles were invented to save human time and effort, but they have also proved fatal as their numbers have increased. It is therefore necessary to address vehicle accidents by creating a user-friendly smart system that provides users with data such as temperature, humidity, direction, and distance travelled, and that facilitates communication between two vehicles in proximity to prevent sudden braking and thus accidents. The use of NFC (Near Field Communication) helps users follow traffic laws by default, creating a robust traffic regulatory system. In addition, the use of two independent servers, one for the regulatory system and one for users, preserves the privacy of user data.
- Book Chapter
- 10.1007/978-981-16-0965-7_72
- Jan 1, 2021
Vehicle monitoring is emerging as a tedious job: it requires maintaining service records or repeatedly recalling service dates, and tracking the vehicle's location to provide better security and safety during travel. Both tasks demand considerable human effort. The proposed model uses technologies such as IoT, cloud computing, and machine learning. IoT allows various devices to interact and collect data such as distance travelled, lubricant level, tyre condition, smoke emission, and the condition of other hardware parts, as well as global positioning system (GPS) data to track the vehicle's location. These data are collected from sensors such as IR (infrared), MQ-6, HC-SR04 ultrasonic, and light-dependent resistor (LDR) sensors and stored in the cloud. A machine learning algorithm trains the proposed model on sample data collected from real-time vehicle service stations for service monitoring and on GPS data for vehicle tracking. The trained model then predicts the vehicle's condition and, based on that, suggests the next service date, reducing the human effort required to predict it. Finally, the collected data are stored in the cloud and used to forecast the upcoming service date, and vehicle service dates and GPS location data are provided through an Android application for the ease of both the user and the service provider.
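The prediction step can be sketched as a simple supervised regression from sensor features to days until the next service; the feature names, coefficients, and data below are hypothetical, since the chapter does not specify its learning algorithm:

```python
# Hypothetical regression from vehicle sensor readings to days-to-service.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n = 200
X = np.column_stack([
    rng.uniform(0, 20000, n),  # distance since last service (km)
    rng.uniform(0, 1, n),      # lubricant level (fraction of full)
    rng.uniform(0, 1, n),      # tyre wear estimate (0 = new, 1 = worn)
])
# Simulated ground truth: more distance and wear -> service due sooner.
days_to_service = (180 - 0.006 * X[:, 0] + 40 * X[:, 1] - 30 * X[:, 2]
                   + rng.normal(0, 5, n))

model = LinearRegression().fit(X, days_to_service)
new_reading = np.array([[15000, 0.4, 0.7]])
print(f"Suggested service in ~{model.predict(new_reading)[0]:.0f} days")
```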
- Book Chapter
- 10.4018/978-1-7998-8413-2.ch006
- Jan 1, 2022
Social media has become part of daily life in the modern world. News media companies (NMCs) use social network sites, including Facebook pages, to keep internet users updated. Public feedback is important to NMCs for producing valuable journalism, but collecting millions of comments by human effort is not cost-effective; this can instead be automated by sentiment analysis. This chapter presents a mobile application called Facemarize that summarizes the contents of news media Facebook pages using sentiment analysis. The sentiment of user comments can be quickly analyzed and summarized with emotion detection. The sentiment analysis achieves an accuracy of over 80%. In a survey of 30 participants, including journalists, journalism students, and journalism graduates, the application scored at least 4.9 (on a 7-point Likert scale) for usefulness, ease of use, ease of learning, and satisfaction, with a mean reliability score of 3.9 (out of 5), showing the effectiveness of the application.
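As a rough illustration of the summarization step, the sketch below labels comments with NLTK's off-the-shelf VADER analyzer and tallies the results; Facemarize's actual sentiment model is not described in enough detail to reproduce:

```python
# Sentiment tally over comments using NLTK's VADER as a stand-in model.
# Requires: nltk.download("vader_lexicon")
from collections import Counter
from nltk.sentiment import SentimentIntensityAnalyzer

comments = [
    "Great reporting, thank you!",
    "This article is misleading and badly written.",
    "Interesting piece.",
]
sia = SentimentIntensityAnalyzer()

def label(text: str) -> str:
    # VADER's compound score in [-1, 1]; conventional cutoffs at +/-0.05.
    score = sia.polarity_scores(text)["compound"]
    return "positive" if score >= 0.05 else "negative" if score <= -0.05 else "neutral"

summary = Counter(label(c) for c in comments)
print(dict(summary))  # e.g. {'positive': 2, 'negative': 1}
```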
- Research Article
- 10.3390/ijms26188855
- Sep 11, 2025
- International Journal of Molecular Sciences
Traumatic brain injury (TBI) is one of the most common forms of neurotrauma, accompanied by significant disruptions in neuronal homeostasis and intercellular communication. A key protein involved in these processes is connexin 43 (Cx43), which facilitates the formation of gap junctions in the astrocytic network. In this study, using confocal and immunofluorescence microscopy, ultrastructural analysis, and molecular modeling, we investigated the dynamics of Cx43 expression and structural changes in neuroglia during various post-traumatic periods following TBI. It was shown that in the acute phase, 24 h post-injury, there is a reduction in Cx43 expression, accompanied by apoptotic neuronal degradation, disruption of nuclear NeuN localization, and destruction of cellular ultrastructure. By 7 days post-injury, a significant increase in Cx43 levels was observed, along with the formation of protein aggregates associated with pronounced reactive astrogliosis. Peripheral blood analysis revealed persistent neutrophilia, lymphopenia, and reduced monocyte levels, reflecting a systemic inflammatory response and immunosuppression, which was corroborated by a custom-trained neural network-based computer vision model. Linear regression and correlation analyses further identified a strong positive association between normalized monocyte levels and Cx43 expression, a moderate negative correlation with lymphocytes, and no significant correlation with neutrophils. Using a custom-built computer vision model, we confirmed these hematological trends and detected subtle changes, such as early increases in platelet counts, that were not captured by manual evaluation. The model demonstrated strong performance in classifying common blood cell types and proved to be a valuable tool for monitoring dynamic post-traumatic shifts in blood. Molecular dynamics modeling of Cx43 identified a pH-dependent mechanism of conformational reorganization under post-traumatic acidosis, mediated by the interaction between protonated His142 and Glu103. This mechanism mimics the structural consequences of the pathogenic E103K mutation and may play a critical role in the neurotoxic effects of Cx43 in TBI. These findings highlight the complexity of Cx43 regulation under traumatic conditions and its potential significance as a target for neuroprotective therapy.
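The correlation and regression analyses mentioned above amount to standard Pearson and least-squares computations; a minimal sketch with placeholder values (not the study's measurements):

```python
# Pearson correlation and linear fit between normalized monocyte levels and
# Cx43 expression; arrays are illustrative placeholders only.
import numpy as np
from scipy.stats import pearsonr, linregress

monocytes = np.array([0.8, 0.6, 0.5, 0.7, 0.9, 1.0, 0.4])  # normalized
cx43 = np.array([1.1, 0.9, 0.7, 1.0, 1.3, 1.4, 0.6])       # relative units

r, p = pearsonr(monocytes, cx43)
fit = linregress(monocytes, cx43)
print(f"r = {r:.2f} (p = {p:.3f}), slope = {fit.slope:.2f}")
```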
- Research Article
- 10.1007/s00521-024-10238-7
- Aug 22, 2024
- Neural Computing and Applications
Ensuring environmental safety and regulatory compliance at Department of Energy (DOE) sites demands an efficient and reliable detection system for low-level nuclear waste (LLW). Unlike existing methods that rely on human effort, this paper explores the integration of computer vision algorithms to automate the identification of such waste across DOE facilities. We evaluate the effectiveness of multiple algorithms in classifying nuclear waste materials and their adaptability to newly emerging LLW. Our research introduces and implements five state-of-the-art computer vision models, each representing a different approach to the problem. Through rigorous experimentation and validation, we evaluate these algorithms based on performance, speed, and adaptability. The results reveal a noteworthy trade-off between detection performance and adaptability. YOLOv7 shows the best performance and requires the highest effort to detect new LLW. Conversely, OWL-ViT has lower performance than YOLOv7 and requires minimal effort to detect new LLW. The inference speed does not strongly correlate with performance or adaptability. These findings offer valuable insights into the strengths and limitations of current computer vision algorithms for LLW detection. Each developed model provides a specialized solution with distinct advantages and disadvantages, empowering DOE stakeholders to select the algorithm that aligns best with their specific needs.
- Research Article
- 10.1016/j.ergon.2021.103218
- Oct 1, 2021
- International Journal of Industrial Ergonomics
Development of a fully automated RULA assessment system based on computer vision
- Book Chapter
- 10.1007/978-3-030-74608-7_14
- Jan 1, 2021
Several manual lifting evaluation tools are currently available to analyze mono-task jobs, yet most jobs involve multiple varying tasks. Therefore, a summation of mono-task analyses may not accurately represent the degree of compressive forces and stress placed on the spine. The Lifting Fatigue Failure Tool (LiFFT) has been adapted from fatigue failure theory (FFT) and is capable of both mono-task and cumulative task evaluation. The FFT relates cumulative damage to the applied stress and the number of cycles to failure; calculating a representative spinal compression is therefore important in applying the corresponding limits. The original Gallagher method requires only three variables to use the LiFFT: the weight of the load, the horizontal distance, and the repetitions per day. Other methods of applying the tool have emerged to achieve a more accurate calculation of spinal compression: the Potvin method adds the vertical height of the load, and the 3DSSPP method uses digital human modeling (DHM) to calculate spine compression. The objective of this study was to compare the different methods of calculating spine compression for entry into the LiFFT and to determine the variance in their outputs. The results showed that the Gallagher method is best suited for lifts that do not require significant vertical postural changes, whereas the Potvin and 3DSSPP methods can assess more complex lifts. Although DHM is the gold standard, the Potvin method is preferred by practitioners due to its ease of use. Overall, the LiFFT is a practical, effective, and practitioner-friendly tool capable of predicting risk to the low back in simple and complex manual lift evaluations.
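At its core, the fatigue-failure approach accumulates damage with the Palmgren-Miner rule, D = sum over tasks of n_i / N_i, where N_i comes from an S-N curve in spinal compression. A sketch with hypothetical constants (not LiFFT's published regression coefficients):

```python
# Palmgren-Miner cumulative damage with a hypothetical S-N curve; the
# constants a and b below are illustrative placeholders.
def cycles_to_failure(compression_n: float, a: float = 9.0, b: float = 0.001) -> float:
    """Hypothetical S-N curve: log10(N) falls linearly with compression (N)."""
    return 10 ** (a - b * compression_n)

def cumulative_damage(tasks: list[tuple[float, int]]) -> float:
    """tasks: list of (spinal compression in newtons, repetitions per day)."""
    return sum(reps / cycles_to_failure(c) for c, reps in tasks)

# A two-task job: a heavy infrequent lift and a light frequent one.
job = [(3400.0, 50), (1800.0, 400)]
print(f"Daily cumulative damage: {cumulative_damage(job):.2e}")
```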