An In-depth Analysis of Rendered Models Using Blender: A Research Result
The research investigated the effectiveness of a novel virtual reality (VR) method for anatomy education using a web application designed for virtual dissection. The platform encompasses several features, including virtual dissection, quizzes, a chat forum, and direct messaging, transforming it into a virtual dissection classroom. To evaluate the VR system, we compared the web application with other existing web applications using parameters such as accuracy, precision, Jaccard index, Dice coefficient, processing time, and user rating. When the rendered models from this research were compared with models from other work, they achieved a Jaccard index of 0.93 and a Dice coefficient of 0.94, along with a remarkable average processing time of 15 seconds and a high user rating of 4.5.
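The Jaccard index and Dice coefficient reported above both measure overlap between a rendered model and a reference model. A minimal Python sketch, using hypothetical 2D voxel sets rather than the study's actual models:

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard index: |A ∩ B| / |A ∪ B| of two voxel sets."""
    return len(a & b) / len(a | b)

def dice(a: set, b: set) -> float:
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|)."""
    return 2 * len(a & b) / (len(a) + len(b))

# Hypothetical reference and rendered-model voxel sets (not the study's data).
reference = {(x, y) for x in range(10) for y in range(10)}
rendered = {(x, y) for x in range(10) for y in range(10) if x + y < 18}

print(jaccard(reference, rendered))  # 0.99
print(dice(reference, rendered))     # slightly higher: Dice = 2J / (1 + J)
```

As the identity in the last comment shows, Dice is always at least as large as Jaccard, which is consistent with the 0.94 vs 0.93 pair reported above.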
- Book Chapter
- 10.1007/978-3-642-16985-4_53
- Jan 1, 2010
The World Wide Web has become an almost limitless source of information. A mashup combines information or functionality from two or more existing Web sources to create a new Web page or application. Unfortunately, it is difficult for users without programming experience to build mashups from existing Web applications, especially when they want to transfer information between the Web applications that make up the mashup. In this paper, we propose a description-based mashup approach for personal use that allows users without programming experience to build mashup applications from existing Web applications and to transfer information between them. The approach is based on information extraction, information transfer and functionality emulation methods. Our implementation shows that general Web applications can be used to build mashup applications easily, without programming.
- Research Article
1
- 10.1016/j.nepr.2025.104302
- Mar 1, 2025
- Nurse education in practice
Effects of 3D virtual cadaver practice on learning motivation, academic achievement and self-efficacy among first-year nursing students.
- Book Chapter
5
- 10.1007/978-3-642-17616-6_52
- Jan 1, 2010
A mashup combines information or functionality from two or more existing Web sources to create a new Web page or application. The Web sources used to build mashup applications mainly include Web applications and Web services. The traditional way of building mashup applications is to write a script or a program that invokes Web services. To help users without programming experience build flexible mashup applications, we propose a mashup approach based on Web applications. Our approach allows users to build mashup applications from existing Web applications without programming. In addition, users can transfer information between Web applications to implement consecutive-query mashup applications. The approach is based on information extraction, information transfer and functionality emulation methods. Our implementation shows that general Web applications can also be used to build mashup applications easily, without programming.
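The information-extraction step that both mashup papers rely on can be illustrated with Python's standard `html.parser`. This is a toy sketch; the `FieldExtractor` class, the target `id`, and the sample page are all hypothetical and do not reflect the authors' actual implementation:

```python
from html.parser import HTMLParser

class FieldExtractor(HTMLParser):
    """Collect the text of the element whose id matches a target --
    a toy stand-in for the information-extraction step of a mashup."""
    def __init__(self, target_id: str):
        super().__init__()
        self.target_id = target_id
        self.capture = False
        self.value = ""

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs.
        if dict(attrs).get("id") == self.target_id:
            self.capture = True

    def handle_endtag(self, tag):
        self.capture = False

    def handle_data(self, data):
        if self.capture:
            self.value += data

# Hypothetical page from one Web application.
page = '<html><body><span id="price">42.00</span></body></html>'
extractor = FieldExtractor("price")
extractor.feed(page)
print(extractor.value)  # "42.00" -- could now be transferred to another app's input form
```

The extracted value corresponds to the "information transfer" stage: it would be filled into an input form of a second Web application via the functionality-emulation method the papers describe.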
- Research Article
4
- 10.1115/1.4054159
- May 24, 2022
- Journal of Mechanical Design
The global pandemic of 2020 caused a paradigm shift in engineering education. In a matter of weeks, and sometimes days, faculty members across the world had to move their hands-on engineering courses to an online environment. During this shift, educators relied on technology more than ever to improve student design learning, without an empirical understanding of the impact of this shift on students' cognition and understanding. The current study was developed to determine the cognitive underpinnings of such shifts by exploring how Augmented Reality (AR) and animation impact engineering student learning, cognitive load, and recall during a virtual product dissection educational activity. This was achieved through a full factorial experiment with 117 first-year engineering students, each assigned to one of four conditions: a baseline virtual dissection; virtual dissection + animation; AR dissection; and AR dissection + animation. The results of the study show that students in the virtual dissection + animation condition showed an increased understanding of the product relative to the three other conditions. In addition, participant cognitive load and recall in the AR conditions were not significantly different than in a non-AR virtual environment. The results are used to provide recommendations on how technology can be utilized in a virtual classroom environment, providing crucial insight into the steps needed to virtualize engineering education during the pandemic as well as future steps toward possible education reform.
- Research Article
5
- 10.2967/jnumed.123.266018
- Feb 15, 2024
- Journal of nuclear medicine : official publication, Society of Nuclear Medicine
Reliable performance of PET segmentation algorithms on clinically relevant tasks is required for their clinical translation. However, these algorithms are typically evaluated using figures of merit (FoMs) that are not explicitly designed to correlate with clinical task performance. Such FoMs include the Dice similarity coefficient (DSC), the Jaccard similarity coefficient (JSC), and the Hausdorff distance (HD). The objective of this study was to investigate whether evaluating PET segmentation algorithms using these task-agnostic FoMs yields interpretations consistent with evaluation on clinically relevant quantitative tasks. Methods: We conducted a retrospective study to assess the concordance in the evaluation of segmentation algorithms using the DSC, JSC, and HD and on the tasks of estimating the metabolic tumor volume (MTV) and total lesion glycolysis (TLG) of primary tumors from PET images of patients with non-small cell lung cancer. The PET images were collected from the American College of Radiology Imaging Network 6668/Radiation Therapy Oncology Group 0235 multicenter clinical trial data. The study was conducted in 2 contexts: (1) evaluating conventional segmentation algorithms, namely those based on thresholding (SUVmax40% and SUVmax50%), boundary detection (Snakes), and stochastic modeling (Markov random field-Gaussian mixture model); (2) evaluating the impact of network depth and loss function on the performance of a state-of-the-art U-net-based segmentation algorithm. Results: Evaluation of conventional segmentation algorithms based on the DSC, JSC, and HD showed that SUVmax40% significantly outperformed SUVmax50%. However, SUVmax40% yielded lower accuracy on the tasks of estimating MTV and TLG, with a 51% and 54% increase, respectively, in the ensemble normalized bias. Similarly, the Markov random field-Gaussian mixture model significantly outperformed Snakes on the basis of the task-agnostic FoMs but yielded a 24% increased bias in estimated MTV. 
For the U-net-based algorithm, our evaluation showed that although the network depth did not significantly alter the DSC, JSC, and HD values, a deeper network yielded substantially higher accuracy in the estimated MTV and TLG, with a decreased bias of 91% and 87%, respectively. Additionally, whereas there was no significant difference in the DSC, JSC, and HD values for different loss functions, up to a 73% and 58% difference in the bias of the estimated MTV and TLG, respectively, existed. Conclusion: Evaluation of PET segmentation algorithms using task-agnostic FoMs could yield findings discordant with evaluation on clinically relevant quantitative tasks. This study emphasizes the need for objective task-based evaluation of image segmentation algorithms for quantitative PET.
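The study's central point, that segmentations with identical task-agnostic figures of merit can differ on a clinically relevant task, can be sketched numerically. The voxel sets and voxel volume below are hypothetical, chosen only to show two segmentations with the same Dice score but different metabolic tumor volume (MTV) bias:

```python
VOXEL_VOLUME_ML = 0.1  # hypothetical voxel volume in mL

def dice(a: set, b: set) -> float:
    """Dice similarity coefficient of two voxel sets."""
    return 2 * len(a & b) / (len(a) + len(b))

def mtv_bias_ml(truth: set, seg: set) -> float:
    """Signed MTV error: (segmented voxels - true voxels) * voxel volume."""
    return (len(seg) - len(truth)) * VOXEL_VOLUME_ML

truth = set(range(100))
# 10 false negatives and 10 false positives: the volume error cancels out.
seg_a = set(range(10, 100)) | set(range(200, 210))
# 1 false negative but 21 false positives: the volume is overestimated.
seg_b = set(range(99)) | set(range(200, 221))

print(dice(truth, seg_a), mtv_bias_ml(truth, seg_a))  # 0.9 0.0
print(dice(truth, seg_b), mtv_bias_ml(truth, seg_b))  # 0.9 2.0
```

Both segmentations score Dice = 0.9, yet one estimates MTV perfectly and the other overestimates it by 2 mL, which is exactly the kind of discordance between task-agnostic FoMs and task performance the paper reports.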
- Research Article
71
- 10.1097/brs.0b013e3181b79358
- Feb 1, 2010
- Spine
Neck-pain and control group comparative analysis of conventional and virtual reality (VR)-based assessment of cervical range of motion (CROM). To use a tracker-based VR system to compare CROM of individuals suffering from chronic neck pain with CROM of asymptomatic individuals; to compare VR system results with those obtained during conventional assessment; to present the diagnostic value of CROM measures obtained by both assessments; and to demonstrate the effect of a single VR session on CROM. Neck pain is a common musculoskeletal complaint with a reported annual prevalence of 30% to 50%. In the absence of a gold standard for CROM assessment, a variety of assessment devices and methodologies exist. Common to these methodologies, assessment of CROM is carried out by instructing subjects to move their head as far as possible. However, these elicited movements do not necessarily replicate functional movements which occur spontaneously in response to multiple stimuli. To achieve a more functional approach to cervical motion assessment, we have recently developed a VR environment in which electromagnetic tracking is used to monitor cervical motion while participants are involved in a simple yet engaging gaming scenario. CROM measures were collected from 25 symptomatic and 42 asymptomatic individuals using VR and conventional assessments. Analysis of variance was used to determine differences between groups and assessment methods. Logistic regression analysis, using a single predictor, compared the diagnostic ability of both methods. Results obtained by both methods demonstrated significant CROM limitations in the symptomatic group. The VR measures showed greater CROM and sensitivity while conventional measures showed greater specificity. A single session exposure to VR resulted in a significant increase in CROM. Neck pain is significantly associated with reduced CROM as demonstrated by both VR and conventional assessment methods. 
The VR method provides assessment of functional CROM and can be used for CROM enhancement. Assessment by VR has greater sensitivity than conventional assessment and can be used for the detection of true symptomatic individuals.
- Research Article
11
- 10.1038/s41433-022-02055-w
- Apr 18, 2022
- Eye
To develop and validate an end-to-end region-based deep convolutional neural network (R-DCNN) to jointly segment the optic disc (OD) and optic cup (OC) in retinal fundus images for precise cup-to-disc ratio (CDR) measurement and glaucoma screening. In total, 2440 retinal fundus images were retrospectively obtained from 2033 participants. An R-DCNN was presented for joint OD and OC segmentation, where the OD and OC segmentation problems were formulated as object detection problems. We compared the R-DCNN's segmentation performance on our in-house dataset with that of four ophthalmologists, and performed quantitative, qualitative and generalization analyses on the publicly available DRISHTI-GS and RIM-ONE v3 datasets. The Dice similarity coefficient (DC), Jaccard coefficient (JC), overlapping error (E), sensitivity (SE), specificity (SP) and area under the curve (AUC) were measured. On our in-house dataset, the proposed model achieved a 98.51% DC and a 97.07% JC for OD segmentation, and a 97.63% DC and a 95.39% JC for OC segmentation, a performance level comparable to that of the ophthalmologists. On the DRISHTI-GS dataset, our approach achieved DC and JC values of 97.23% and 94.17% for OD segmentation, respectively, and a 94.56% DC and an 89.92% JC for OC segmentation. Additionally, on the RIM-ONE v3 dataset, our model generated DC and JC values of 96.89% and 91.32% on the OD segmentation task, respectively, whereas the DC and JC values for OC segmentation were 88.94% and 78.21%, respectively. The proposed approach achieved very encouraging performance on the OD and OC segmentation tasks, as well as in glaucoma screening, and has the potential to serve as a useful tool for computer-assisted glaucoma screening.
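The cup-to-disc ratio derived from the OD and OC segmentations is commonly computed as the ratio of their vertical diameters. A sketch with toy rectangular masks (in the paper the masks come from the R-DCNN, not hand-built sets):

```python
def vertical_diameter(mask: set) -> int:
    """Vertical extent, in pixels, of a binary mask given as (row, col) pairs."""
    rows = [r for r, _ in mask]
    return max(rows) - min(rows) + 1

def vertical_cdr(cup: set, disc: set) -> float:
    """Vertical cup-to-disc ratio from two segmentation masks."""
    return vertical_diameter(cup) / vertical_diameter(disc)

# Toy masks: a 100-pixel-tall disc containing a 40-pixel-tall cup.
disc = {(r, c) for r in range(100) for c in range(100)}
cup = {(r, c) for r in range(30, 70) for c in range(30, 70)}

print(vertical_cdr(cup, disc))  # 0.4
```

A larger vertical CDR is one of the signs screened for in glaucoma assessment, which is why accurate OD and OC masks matter downstream.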
- Research Article
25
- 10.1007/s11760-019-01599-x
- Nov 25, 2019
- Signal, Image and Video Processing
Osteosarcoma is a primary malignant bone tumor in children and adolescents with significant morbidity and poor prognosis. Diffusion weighted imaging (DWI) plays a crucial role in diagnosis and prognosis of this malignant disease by capturing cellular changes in tumor tissue early in the course of treatment without any contrast injection. Segmentation of tumor in DWI is challenging due to low signal-to-noise ratio, partial-volume effects, intensity inhomogeneities and the irregular shape of osteosarcoma. The purpose of this study was to segment osteosarcoma solely utilizing DWI and identify effective and robust technique(s) for tumor segmentation. A DWI dataset of fifty-five patients (N = 55; male:female = 41:14; age = 17.8 ± 7.4 years) with osteosarcoma was acquired before treatment. A total of nine automated and semi-automated segmentation algorithms based on (1) Otsu thresholding (OT), (2) Otsu threshold-based region growing (OT-RG), (3) Active contour (AC), (4) Simple linear iterative clustering superpixels (SLIC-S), (5) Fuzzy c-means clustering (FCM), (6) Graph cut (GC), (7) Logistic regression (LR), (8) Linear support vector machines (L-SVM) and (9) Deep feed-forward neural network (DNN) were implemented. Segmentation accuracy was estimated by Dice coefficient (DC), Jaccard index (JI), precision (P) and recall (R) against a ground-truth tumor mask manually demarcated by a radiologist. The apparent diffusion coefficient (ADC) evaluated in the segmented and ground-truth tumor masks was compared using a paired t test for statistical significance (p < 0.05) and the Pearson correlation coefficient (PCC). The automated SLIC-S and FCM showed quantitatively and qualitatively superior segmentation (DC: ~79–82%; JI: ~67–71%; P: ~81–83%; R: ~80–86%; PCC = 0.89, 0.88) among all methods.
Among the semi-automated methods, AC was quantitatively more accurate (DC: ~77%; JI: ~65%; P: ~72%; R: ~88%; PCC = 0.85) than OT-RG and GC (DC: ~74–75%; JI: ~60–61%; P: ~67–72%; R: ~84–89%; PCC = 0.78, 0.73). Among the machine learning algorithms, DNN showed higher accuracy (DC: ~73%; JI: ~62%; P: ~77%; R: ~86%; PCC = 0.79) than LR and L-SVM (DC: ~70–71%; JI: ~58–63%; P: ~73%; R: ~74–85%; PCC = 0.69, 0.71). Execution times were near-instantaneous for SLIC-S, FCM and the machine learning methods, while OT-RG, AC and GC took a comparable ~1–6 s per slice. The automated SLIC-S and FCM and the semi-automated AC methods produced promising tumor segmentation results on the osteosarcoma DWI dataset.
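Otsu thresholding, the basis of the first two of the nine methods above, chooses the gray level that maximizes the between-class variance of the resulting foreground/background split. A self-contained sketch on a hypothetical bimodal intensity list (real DWI slices would of course be 2D arrays with noise):

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: return the threshold maximizing between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_bg = 0.0
    w_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]                 # background weight: pixels <= t
        if w_bg == 0:
            continue
        w_fg = total - w_bg             # foreground weight: pixels > t
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy "image": dark background around 20, bright tumor around 200.
pixels = [20] * 500 + [200] * 100
t = otsu_threshold(pixels)
tumor_mask = [p > t for p in pixels]  # 100 tumor pixels selected
```

In the OT-RG variant the paper lists, the Otsu threshold would then seed a region-growing step rather than being used directly as the final mask.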
- Research Article
2
- 10.2196/69021
- Mar 20, 2025
- JMIR Serious Games
Background: Proper donning and doffing of personal protective equipment (PPE) and hand hygiene in the correct spatial context of a health facility are important for the prevention and control of nosocomial infections. On-site training is difficult due to the potential infectious risks and shortages of PPE, whereas video-based training lacks the immersion that is vital for familiarization with the environment. Virtual reality (VR) training can support the repeated practice of PPE donning and doffing in an immersive environment that simulates a realistic configuration of a health facility. Objective: This study aims to develop and evaluate a VR simulation focusing on the correct event order of PPE donning and doffing, that is, the item and hand hygiene order in the donning and doffing process but not the detailed steps of how to don and doff an item, in an immersive environment that replicates the spatial zoning of a hospital. The VR method should be generic and support customizable sequencing of PPE donning and doffing. Methods: An immersive VR PPE training tool was developed by computer scientists and medical experts. The effectiveness of the immersive VR method versus video-based learning was tested in a pilot study as a randomized controlled trial (N=32: VR group, n=16; video-based training, n=16) using questionnaires on spatial-aware event order memorization, usability, and task workload. Trajectories of participants in the immersive environment were also recorded for behavior analysis and potential improvements of the real environment of the health facility. Results: Comparable sequence memorization scores (VR mean 79.38, SD 12.90 vs video mean 74.38, SD 17.88; P=.37) as well as National Aeronautics and Space Administration Task Load Index scores (VR mean 42.9, SD 13.01 vs video mean 51.50, SD 20.44; P=.16) were observed.
The VR group scored above average on the System Usability Scale (mean 74.78 > 70.0) and significantly better than the video group (VR mean 74.78, SD 13.58 vs video mean 57.73, SD 21.13; P=.009). The analysis and visualization of trajectories revealed a positive correlation between trajectory length and completion time, but neither correlated with accuracy on the memorization task. Further user feedback indicated a preference for the VR method over the video-based method. Limitations of and suggestions for improvements in the study were also identified. Conclusions: A new immersive VR PPE training method was developed and evaluated against video-based training. Results of the pilot study indicate that the VR method provides training quality comparable to video-based training and is more usable. In addition, the immersive experience of realistic settings and the flexibility of training configurations make the VR method a promising alternative to video instructions.
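The System Usability Scale scores reported here follow a fixed formula: ten Likert items rated 1 to 5, odd-numbered (positively worded) items contributing r − 1, even-numbered (negatively worded) items contributing 5 − r, and the sum scaled by 2.5 to a 0 to 100 range. A sketch of the scoring (the response vectors are illustrative, not the study's data):

```python
def sus_score(responses):
    """System Usability Scale score from ten 1-5 Likert responses.
    Odd items contribute (r - 1), even items (5 - r); sum * 2.5 gives 0-100."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0 (best possible answers)
print(sus_score([3] * 10))                        # 50.0 (all-neutral answers)
```

The 70.0 benchmark the abstract compares against is a commonly used "above average" cutoff for SUS scores.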
- Research Article
4
- 10.4238/2015.august.19.16
- Jan 1, 2015
- Genetics and molecular research : GMR
In this study, we analyzed dominant molecular markers to estimate the genetic divergence of 26 popcorn genotypes and evaluate whether using various dissimilarity coefficients with these dominant markers influences the results of cluster analysis. Fifteen random amplification of polymorphic DNA primers produced 157 amplified fragments, of which 65 were monomorphic and 92 were polymorphic. To calculate the genetic distances among the 26 genotypes, the complements of the Jaccard, Dice, and Rogers and Tanimoto similarity coefficients were used. A matrix of Dij values (dissimilarity matrix) was constructed, from which the genetic distances among genotypes were represented in a more simplified manner as a dendrogram generated using the unweighted pair-group method with arithmetic average. Clusters determined by molecular analysis generally did not group material from the same parental origin together. The largest genetic distance was between varieties 17 (UNB-2) and 18 (PA-091). In the identification of genotypes with the smallest genetic distance, the 3 coefficients showed no agreement. The 3 dissimilarity coefficients showed no major differences among their grouping patterns because agreement in determining the genotypes with large, medium, and small genetic distances was high. The largest genetic distances were observed for the Rogers and Tanimoto dissimilarity coefficient (0.74), followed by the Jaccard coefficient (0.65) and the Dice coefficient (0.48). The 3 coefficients showed similar estimations for the cophenetic correlation coefficient. Correlations among the matrices generated using the 3 coefficients were positive and had high magnitudes, reflecting strong agreement among the results obtained using the 3 evaluated dissimilarity coefficients.
- Research Article
2
- 10.18502/fbt.v8i1.5858
- Mar 30, 2021
- Frontiers in Biomedical Technologies
Purpose: Glioma tumor segmentation is an essential step in clinical decision making. Recently, computer-aided methods have been widely used for rapid and accurate delineation of tumor regions. Methods based on image feature extraction are fast, while segmentation based on the physiology and pharmacokinetics of the tissues is more accurate. This study aims to compare the performance of tumor segmentation based on these two different methods. Materials and Methods: Nested Model Selection (NMS) based on the extended Tofts model was applied to 190 Dynamic Contrast-Enhanced MRI (DCE-MRI) slices acquired from 25 Glioblastoma Multiforme (GBM) patients at 70 time points. A model with three pharmacokinetic parameters, Model 3, is usually assigned to tumor voxels based on the time-contrast concentration signal. We utilized Deep-Net, a CNN based on Deeplabv3+ with layers of pre-trained ResNet-18 that had been trained on 17,288 T1-contrast MRI slices with HGG brain tumors, to predict the tumor region in our 190 DCE-MRI T1 images. The NMS-based physiological tumor segmentation was used as the reference against which the Deep-Net results were compared. Dice, Jaccard, and overlay similarity coefficients were used to evaluate the accuracy and reliability of the deep tumor segmentation method. Results: The results showed relatively high similarity coefficients (Dice: 0.73±0.15, Jaccard: 0.66±0.17, and overlay: 0.71±0.15) between the deep learning segmentation and the tumor region identified by the NMS method, indicating that deep learning methods may serve as accurate and robust tumor segmentation tools. Conclusion: Deep learning-based segmentation can play a significant role in increasing segmentation accuracy in clinical applications, provided the training process is fully automatic and independent of human error.
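The "overlay" coefficient reported alongside Dice and Jaccard is assumed here to be the standard overlap (Szymkiewicz–Simpson) coefficient; a minimal sketch with hypothetical voxel sets, not the study's segmentations:

```python
def overlap_coefficient(a: set, b: set) -> float:
    """Szymkiewicz-Simpson overlap coefficient: |A ∩ B| / min(|A|, |B|)."""
    return len(a & b) / min(len(a), len(b))

# Hypothetical deep-learning and NMS-reference tumor voxel sets.
deep_mask = set(range(80))
nms_mask = set(range(10, 100))

print(overlap_coefficient(deep_mask, nms_mask))  # 0.875
```

Because it normalizes by the smaller set, the overlap coefficient is never below Jaccard for the same pair of masks, consistent with the study's overlay value (0.71) exceeding its Jaccard value (0.66).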
- Conference Article
1
- 10.1109/cloudcom-asia.2013.98
- Dec 1, 2013
Most enterprises use web applications to communicate with their customers, partners, shareholders and others, and to carry out commercial activities and business transactions. As web applications are extensively used, the operating environment plays an important role in determining their efficiency. Nowadays the cloud has proved to be one of the best operating environments for deploying web applications because of features like automatic load balancing, scalability, ease of maintenance and cost. As the demand for cloud computing increases, many organizations are looking for cloud-based software to reduce their deployment cost and server maintenance overhead. Existing web applications such as individual/corporate websites, CRM/ERP applications, E-Publishing, E-Government, E-Commerce and E-Learning are slowly migrating to the cloud. On the other hand, managing the content of such web applications is a tedious task. Content Management Systems (CMS) have already proved to be a good choice for developing web applications, ensuring rapid application development and ease of use. A CMS allows even a non-technical user to create, edit, manage and publish content easily. When a CMS is coupled with the cloud, the resultant application is highly efficient and easily manageable. In this paper we propose a new technique to develop cloud-based software using content management systems.
- Book Chapter
- 10.4018/978-1-59904-762-1.ch010
- Jan 1, 2008
Web applications, which are computer programs ported to the Web, allow end-users to use various remote services and tools through their Web browsers. There are an enormous number of Web applications on the Web, and they are becoming the basic infrastructure of everyday life. In spite of the remarkable development of Web-based infrastructure, it is still difficult for end-users to compose new integrated tools from both existing Web applications and legacy local applications, such as spreadsheets, chart tools, and databases. In this chapter, the authors propose a new framework where end-users can wrap remote Web applications into visual components, called pads, and functionally combine them together through drag-and-drop operations. The authors use, as the basis, a meme media architecture, IntelligentPad, that was proposed by the second author. In the IntelligentPad architecture, each visual component, called a pad, has slots as data I/O ports. By pasting a pad onto another pad, users can integrate their functionalities. The framework presented in this chapter allows users to visually create a wrapper pad for any Web application by defining HTML nodes within the Web application to work as slots. Examples of such a node include input forms and text strings on Web pages. Users can directly manipulate both wrapped Web applications and wrapped local legacy tools on their desktop screen to define application linkages among them. Since no programming expertise is required to wrap Web applications or to functionally combine them together, end-users can build new integrated tools from both wrapped Web applications and local legacy applications.
- Conference Article
49
- 10.1145/900051.900092
- Aug 26, 2003
HTML-based interface technologies enable end-users to easily use various remote Web applications. However, it is difficult for end-users to compose new integrated tools from both existing Web applications and legacy local applications such as spreadsheets, chart tools and databases. In this paper, the authors propose a new framework where end-users can wrap remote Web applications into visual components called pads, and functionally combine them together through drag-and-drop paste operations. The authors use, as the basis, a meme media architecture, IntelligentPad, that was proposed by the second author. In the IntelligentPad architecture, each visual component, called a pad, has slots as data I/O ports. By pasting a pad onto another pad, users can integrate their functionalities. The framework presented in this paper allows users to visually create a wrapper pad for any Web application by defining HTML nodes within the Web application to work as slots. Examples of such a node include input forms and text strings on Web pages. Users can directly manipulate both wrapped Web applications and wrapped local legacy tools on their desktop screen to define application linkages among them. Since no programming expertise is required to wrap Web applications or to functionally combine them together, end-users can build new integrated tools from both wrapped Web applications and local legacy applications.
- Research Article
1
- 10.18517/ijaseit.13.1.16028
- Feb 26, 2023
- International Journal on Advanced Science, Engineering and Information Technology
The main goal of this study is to develop a mobile Virtual Reality (VR) application to teach basic Python coding skills to university students who struggle to learn to code. The study employs a quasi-experimental method to examine the difference in efficiency between VR and traditional learning methods by evaluating the students' performance. Thirty students between 18 and 22 years old participated. The participants were divided into two groups: one used the conventional Python learning method while the other used the VR application. Unity 3D was used as the application development tool, following the Mobile Application Development Lifecycle (MADLC). The developed VR application was deployed with Google Cardboard to create an immersive VR experience. Usability tests, hypothesis tests, the Presence Questionnaire (PQ) and the System Usability Scale (SUS) were used as evaluation tools. Findings illustrate that learning through VR yielded better performance than the conventional learning method: in hypothesis testing, the VR method suggested more effective learning with a t-statistic of 4.992, larger than the critical value of 2.76. 73% of the participants rated the application above 68 out of 100, indicating high satisfaction with using the mobile VR application to learn Python. In short, the VR method is perceived as useful and convenient, helping students learn at any place and time.
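The hypothesis test described, comparing a t-statistic of 4.992 against a critical value of 2.76, is consistent with an independent two-sample t-test on the performance scores of the two groups. A pooled-variance sketch with hypothetical score samples (not the study's data):

```python
from statistics import mean, variance

def pooled_t(sample_a, sample_b):
    """Independent two-sample t-statistic with pooled sample variance."""
    na, nb = len(sample_a), len(sample_b)
    # Pooled variance weights each sample variance by its degrees of freedom.
    sp2 = ((na - 1) * variance(sample_a) + (nb - 1) * variance(sample_b)) / (na + nb - 2)
    return (mean(sample_a) - mean(sample_b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

# Hypothetical performance scores for illustration only.
vr_scores = [85, 78, 90, 82, 88]
conventional_scores = [70, 65, 74, 68, 72]

t_statistic = pooled_t(vr_scores, conventional_scores)
print(round(t_statistic, 3))  # 5.594
```

A t-statistic exceeding the chosen critical value, as in the study, leads to rejecting the null hypothesis that the two learning methods produce equal mean performance.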