XAIUI: User Belief-Driven Explainable AI for Context-Aware Adaptive Interfaces
Explainable AI (XAI) offers solutions to the challenges of predictability and interpretability in adaptive interfaces, particularly in Augmented Reality (AR) systems that dynamically adapt information based on situational contexts. While traditional XAI methods highlight the contextual factors influencing adaptations, they often overlook the user's internal understanding, such as their expertise and contextual perceptions. This omission can result in explanations that feel redundant or obvious. We present XAIUI, a computational approach that generates tailored explanations by integrating the system's adaptation model with a Bayesian model of the user's internal representation. Two online studies evaluated XAIUI. In the first study (N=77), participants ranked XAIUI's explanations as the most preferred compared to four ablations (\(\chi^{2}(4)=62.28, p<.001\)). In the second study (N=110), XAIUI's explanations were rated significantly less complex (\(\chi^{2}(4)=840.855, p<.001\)) than all ablations, except showing no explanation. Our results demonstrate XAIUI's ability to deliver user-centric, concise, and intuitive explanations, highlighting its potential to enhance AI-driven interfaces.
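The preference result above is the kind of statistic a Friedman rank test yields for k ranked conditions (here df = k − 1 = 4 for five interface variants; whether the authors used exactly this test is an assumption). A minimal, self-contained sketch using invented placeholder rankings, not the study's data:

```python
import random

def friedman_chi2(rankings):
    """Friedman chi-square statistic over per-participant rankings.

    rankings: list of lists; each inner list holds one participant's
    ranks (1..k) of the k conditions, in a fixed condition order.
    """
    n = len(rankings)      # number of participants
    k = len(rankings[0])   # number of conditions (explanation variants)
    # Sum of ranks received by each condition across participants.
    col_sums = [sum(row[j] for row in rankings) for j in range(k)]
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in col_sums) - 3.0 * n * (k + 1)

# Made-up placeholder data: 77 participants each rank 5 variants (1 = best).
random.seed(0)
k = 5
data = [random.sample(range(1, k + 1), k) for _ in range(77)]
print(f"chi2({k - 1}) = {friedman_chi2(data):.2f}")  # df = k - 1 = 4
```

The statistic is 0 when rank sums are perfectly balanced and reaches its maximum, n(k − 1), when every participant produces the identical ranking.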
- Research Article
- 10.1016/j.jmp.2006.06.006
- Aug 14, 2006
- Journal of Mathematical Psychology
On additivity of duration reproduction functions
- Conference Article
- 10.23919/ilrn47897.2020.9155113
- Jun 1, 2020
Sensory-perceptual difficulties are a common characteristic of Autism Spectrum Disorders (ASD). Studies in children with ASD describe an array of challenging behaviors regarding sensory stimulation. Parents, special educators, and therapists often witness these behaviors throughout the day. This study aimed to use an Augmented Reality (AR) system to simulate the sensory overload that children with ASD experience. A total of seventy (N=70) parents, special educators, and therapists of children with ASD responded to the researchers' invitation for this study. The researchers held individual sessions with each participant, who wore a head-mounted AR device (Magic Leap One™). Six visual and two auditory stimuli were individually and consecutively administered via the AR device. Participants experienced autism-like sensory overload under controlled conditions. Their acceptance and experience of the AR device were measured with three online questionnaires (Temple Presence Inventory, TPI; Simulator Sickness Questionnaire, SSQ; Technology Acceptance Model, TAM). An open-ended question was also administered to measure the overall AR experience. Regarding participants' experience, the results from the TPI (Cronbach's alpha = .84) suggested that the AR system offered a convincing blended environment. Low scores on the SSQ indicated that the use of the AR system was comfortable. The TAM results mostly showed high internal consistency (> .70) and high mean scores, indicating that the participants accepted the AR system. In the open-ended question, participants reported overall satisfaction with their experience of the AR system. The study's findings suggest that the AR device enabled participants to experience a sensory overload similar to the one children with ASD report. Participants' experience was deemed convincing, comfortable, and user-friendly.
The results of this study are encouraging and highlight the potential of AR in autism research. Future studies are needed to incorporate richer and more interactive AR simulations for authentic real-life experiences.
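The internal-consistency figures quoted above (Cronbach's alpha = .84; > .70 for the TAM) follow the standard formula α = k/(k−1) · (1 − Σσ²_item / σ²_total). A small sketch with invented Likert-item scores, not the study's questionnaire data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha; items is a list of per-item score lists,
    all over the same respondents and in the same respondent order."""
    k = len(items)       # number of questionnaire items
    n = len(items[0])    # number of respondents

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Total score per respondent across all items.
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Three hypothetical 5-point Likert items answered by five respondents.
items = [[4, 5, 3, 4, 5],
         [4, 4, 3, 5, 5],
         [5, 5, 2, 4, 4]]
print(f"alpha = {cronbach_alpha(items):.2f}")
```

With these invented scores the items co-vary strongly, so alpha comes out around 0.81, above the common .70 acceptability threshold the abstract alludes to.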
- Research Article
- 10.1176/appi.neuropsych.21030067
- Jul 1, 2021
- The Journal of neuropsychiatry and clinical neurosciences
Extended-Reality Technologies: An Overview of Emerging Applications in Medical Education and Clinical Care.
- Conference Article
- 10.1117/12.2512250
- Mar 8, 2019
In transperineal prostate biopsy or ablation, a grid template is typically used to guide the needle. This guidance method has limited positioning resolution and lacks needle angulation options referenced to ultrasound imaging or TRUS-MRI fusion targets. To overcome these limitations, a novel augmented reality (AR) system that uses smart see-through glasses and a smartphone as a needle guidance device for transperineal prostate procedures was developed. The AR system comprises an MRI/CT scanner, pre-procedural image analysis and visualization software, AR devices (smart glasses, smartphone), a newly developed AR app, and a local network. The AR app displays the lesion and the planned needle trajectory, derived from the pre-procedural images, on the AR devices. A specially designed image marker frame affixed to the patient's perineum was used to track the pre-procedural image with the AR devices. The displayed needle plan was always referenced to the patient and remained independent of the position and orientation of the devices. Multiple devices can be used simultaneously and communicate via the local network. We evaluated the AR system's accuracy with an iPhone and R-7 glasses in a phantom study. The image overlay accuracy was 0.58±0.43° for the iPhone and 1.62±1.52° for the R-7 glasses. The accuracy of iPhone guidance was 1.9±0.97 mm (lateral) and 1.0±0.5 mm (in-direction); the accuracy of R-7 guidance was 2.8±1.4 mm (lateral) and 2.3±1.5 mm (in-direction). An AR system using smart glasses and a smartphone can thus provide accurate needle guidance and a see-through-the-skin display for needle-based transperineal prostate interventions such as biopsy and ablation.
- Research Article
- 10.1007/s11999.0000000000000233
- Feb 24, 2018
- Clinical Orthopaedics & Related Research
Application of surgical navigation for pelvic bone cancer surgery may prove useful, but in addition to the fact that research supporting its adoption remains relatively preliminary, the actual navigation devices are physically large, occupying considerable space in already crowded operating rooms. To address this issue, we developed and tested a navigation system for pelvic bone cancer surgery assimilating augmented reality (AR) technology to simplify the system by embedding the navigation software into a tablet personal computer (PC). Using simulated tumors and resections in a pig pelvic model, we asked: Can AR-assisted resection reduce errors in terms of planned bone cuts and improve ability to achieve the planned margin around a tumor in pelvic bone cancer surgery? We developed an AR-based navigation system for pelvic bone tumor surgery, which could be operated on a tablet PC. We created 36 bone tumor models for simulation of tumor resection in pig pelves and assigned 18 each to the AR-assisted resection group and conventional resection group. To simulate a bone tumor, bone cement was inserted into the acetabular dome of the pig pelvis. Tumor resection was simulated in two scenarios. The first was AR-assisted resection by an orthopaedic resident and the second was resection using conventional methods by an orthopaedic oncologist. For both groups, resection was planned with a 1-cm safety margin around the bone cement. Resection margins were evaluated by an independent orthopaedic surgeon who was blinded as to the type of resection. All specimens were sectioned twice: first through a plane parallel to the medial wall of the acetabulum and second through a plane perpendicular to the first. The distance from the resection margin to the bone cement was measured at four different locations for each plane. The largest of the four errors on a plane was adopted for evaluation. 
Therefore, each specimen had two values of error, which were collected from two perpendicular planes. The resection errors were classified into four grades: ≤ 3 mm; 3 to 6 mm; 6 to 9 mm; and > 9 mm or any tumor violation. Student's t-test was used for statistical comparison of the mean resection errors of the two groups. The mean of 36 resection errors of 18 pelves in the AR-assisted resection group was 1.59 mm (SD, 4.13 mm; 95% confidence interval [CI], 0.24-2.94 mm) and the mean error of the conventional resection group was 4.55 mm (SD, 9.7 mm; 95% CI, 1.38-7.72 mm; p < 0.001). All specimens in the AR-assisted resection group had errors < 6 mm, whereas 78% (28 of 36) of errors in the conventional group were < 6 mm. In this in vitro simulated tumor model, we demonstrated that AR assistance could help to achieve the planned margin. Our model was designed as a proof of concept; although our findings do not justify a clinical trial in humans, they do support continued investigation of this system in a live animal model, which will be our next experiment. The AR-based navigation system provides additional information of the tumor extent and may help surgeons during pelvic bone cancer surgery without the need for more complex and cumbersome conventional navigation systems.
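The group comparison above uses Student's t-test on the two samples of resection errors. A minimal pooled-variance sketch; the error samples below are invented illustrations that only loosely echo the reported means and SDs, not the study's measurements:

```python
import math
import random

def students_t(a, b):
    """Two-sample Student's t statistic (pooled variance,
    equal-variance assumption), as used for comparing group means."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ssa = sum((x - ma) ** 2 for x in a)   # sum of squared deviations, group a
    ssb = sum((x - mb) ** 2 for x in b)
    sp2 = (ssa + ssb) / (na + nb - 2)     # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Invented illustrative error samples (mm), 36 values per group:
random.seed(1)
ar_errors = [random.gauss(1.6, 2.0) for _ in range(36)]
conv_errors = [random.gauss(4.6, 3.0) for _ in range(36)]
t = students_t(ar_errors, conv_errors)
print(f"t({36 + 36 - 2}) = {t:.2f}")
```

A large negative t here indicates the first group's mean error is well below the second's; the resulting p-value would then be read from the t distribution with n₁ + n₂ − 2 degrees of freedom.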
- Dissertation
- 10.25394/pgs.12234899.v1
- May 5, 2020
Augmented Reality (AR) is a powerful computer to human visual interface that displays data overlaid onto the user's view of the real world. Compared to conventional visualization on a computer display, AR has the advantage of saving the user the cognitive effort of mapping the visualization to the real world. For example, a user wearing AR glasses can find a destination in an urban setting by following a virtual green line drawn by the AR system on the sidewalk, which is easier to do than having to rely on navigational directions displayed on a phone. Similarly, a surgeon looking at an operating field through an AR display can see graphical annotations authored by a remote mentor as if the mentor actually drew on the patient's body. However, several challenges remain to be addressed before AR can reach its full potential. This research contributes solutions to four such challenges. A first challenge is achieving visualization continuity for AR displays. Since truly transparent displays are not feasible, AR relies on simulating transparency by showing a live video on a conventional display. For correct transparency, the display should show exactly what the user would see if the display were not there. Since the video is not captured from the user viewpoint, simply displaying each frame as acquired results in visualization discontinuity and redundancy. A second challenge is providing the remote mentor with an effective visualization of the mentee's workspace in AR telementoring. Acquiring the workspace with a camera built into the mentee's AR headset is appealing since it captures the workspace from the mentee's viewpoint, and since it does not require external hardware. However, the workspace visualization is unstable as it changes frequently, abruptly, and substantially with each mentee head motion. A third challenge is occluder removal in diminished reality. 
Whereas in conventional AR the user's visualization of a real world scene is augmented with graphical annotations, diminished reality aims to aid the user's understanding of complex real world scenes by removing objects from the visualization. The challenge is to paint over occluder pixels using auxiliary videos acquired from different viewpoints, in real time, and with good visual quality. A fourth challenge is to acquire scene geometry from the user viewpoint, as needed in AR, for example, to integrate virtual annotations seamlessly into the real world scene through accurate depth compositing, and shadow and reflection casting and receiving. Our solutions are based on the thesis that images acquired from different viewpoints should not always be connected by computing a dense, per-pixel set of correspondences, but rather by devising custom, lightweight, yet sufficient connections between them, for each unique context. We have developed a self-contained phone-based AR display that aligns the phone camera view with the user's view, reducing visualization discontinuity to less than 5% for scene distances beyond 5 m. We have developed and validated in user studies an effective workspace visualization method by stabilizing the mentee first-person video feed through reprojection on a planar proxy of the workspace. We have developed a real-time occluder in-painting method for diminished reality based on a two-stage coarse-then-fine mapping between the user and the auxiliary view. The mapping is established in time linear with occluder contour length, and it achieves good continuity across the occluder boundary. We have developed a method for 3D scene acquisition from the user viewpoint based on single-image triangulation of correspondences between left and right eye corneal reflections.
The method relies on a subpixel accurate calibration of the catadioptric imaging system defined by two corneas and a camera, which enables the extension of conventional epipolar geometry for a fast connection between corneal reflections.
- Book Chapter
- 10.1007/978-3-319-43982-2_6
- Aug 17, 2016
A statistical machine translation (SMT) capability would be very useful in augmented reality (AR) systems. For example, translating and displaying text in a smartphone camera image would help a traveler read signs and restaurant menus, or read medical documents when a medical problem arises while visiting a foreign country. Such a system would also be useful for foreign students translating lectures in real time on their mobile devices. However, SMT quality has been neglected in AR systems research, which has focused on other aspects, such as image processing, optical character recognition (OCR), distributed architectures, and user interaction. In addition, general-purpose translation services, such as Google Translate, used in some AR systems are not well tuned to produce high-quality translations in specific domains and depend on an Internet connection. This research devised SMT methods and evaluated their performance for potential use in AR systems. We give particular attention to domain-adapted SMT systems, in which an SMT capability is tuned to a particular domain of text to increase translation quality. We focus on translation between the Polish and English languages, which presents a number of challenges due to fundamental linguistic differences. However, the SMT systems used are readily extensible to other language pairs. SMT techniques are applied to two domains in translation experiments: European Medicines Agency (EMEA) medical leaflets and the Technology, Entertainment, Design (TED) lectures. In addition, field experiments are conducted on random samples of Polish text found in city signs, posters, restaurant menus, lectures on biology and computer science, and medical leaflets. Texts from these domains are translated by a number of SMT system variants, and the systems' performance is evaluated by standard translation performance metrics and compared.
The results appear very promising and encourage future applications of SMT to AR systems.
- Research Article
- 10.18857/jkpt.2019.31.3.141
- Jun 30, 2019
- The Journal of Korean Physical Therapy
Purpose: To investigate the effect of an augmented reality (AR) system on muscle strength and function of the paretic lower limb and on balance ability in the early rehabilitation of acute stroke patients. Methods: The participants (30 or fewer days after stroke) were randomly assigned to receive an early rehabilitation program using an AR system (n=1) or an early rehabilitation program consisting of functional electrical stimulation and tilt table use (n=1). Both subjects received interventions 4-5 times a week for 3 weeks. Results: Paretic limb muscle strength increased from 15 to 39.6 Nm in the AR subject and from 5 to 30.2 Nm in the control subject. Motor function of the paretic limb increased from a score of 8 to 28 in the AR subject and from 6 to 14 in the control subject, while sensory function differed very little between the two subjects (AR subject: from 4 to 10; control subject: from 3 to 10). In balance ability, the AR subject showed a greater change after intervention than the control subject (AR subject: 33; control subject: 22). Conclusion: The early rehabilitation program using the AR system showed slightly greater improvement in paretic lower-limb motor function and balance than the general early rehabilitation program. The AR system, which can provide a more active, task-oriented, and motivating environment, may offer a meaningful setting for the initial rehabilitation process after stroke.
- Research Article
- 10.1504/ijcps.2018.10014237
- Jan 1, 2018
- International Journal of Cognitive Performance Support
Traditional teaching models can often only provide a one-way transmission of knowledge in real-world situations. These methods are rarely effective, as students require much more interaction with the instructor to gain and retain knowledge from the curriculum. For these reasons, this study supports the use of an interactive augmented reality (AR) system that combines AR and QR code technologies for teaching purposes. The AR system includes two subsystems: the mobile AR system and the AR materials remote server. Users can easily build their own learning environments and include information and materials relevant to their needs. We hope that this study will promote the use of AR systems for educational purposes and provide students with virtual content linked to real-world objects to create an interactive method of learning new information. Our final goal is to encourage the widespread use of AR technology.
- Research Article
- 10.1007/s11042-010-0660-6
- Dec 14, 2010
- Multimedia Tools and Applications
This paper surveys the current state of the art of technology, systems, and applications in Augmented Reality. It describes work performed by many different research groups, the purpose behind each new Augmented Reality system, and the difficulties and problems encountered when building some Augmented Reality applications. It also surveys the challenges of mobile augmented reality systems and the requirements for successful mobile systems. The paper summarizes the current applications of Augmented Reality and speculates on future applications and on where current research will lead Augmented Reality's development. The challenges augmented reality faces in each of these applications in moving from the laboratory to industry, as well as the future challenges we can forecast, are also discussed. Section 1 introduces what Augmented Reality is and the motivations for developing this technology. Section 2 discusses Augmented Reality technologies, including computer vision methods, AR devices, interfaces and systems, and visualization tools. Mobile and wireless systems for Augmented Reality are discussed in Section 3. Four classes of current applications that have been explored are described in Section 4; these were chosen as the most prominent types of applications encountered in AR research. The future of augmented reality and the challenges it will face are discussed in Section 5.
- Research Article
- 10.1386/jpm_00007_1
- Aug 1, 2023
- Journal of Pervasive Media
The use of augmented reality (AR) in literature has predominantly focused on providing instructional tools for educators and learners, emphasizing its capacity to enhance engagement and support immersive learning. Aside from its central literacy purpose, AR in children's literature appears in the form of digital pop-up books or as add-on games, which are usually located outside of the main narrative. In this sense, AR risks disrupting the flow of a story instead of enriching the continuous imaginative space of the narrative. The Dragon Defenders series, by James Russell, uses AR as a narrative tool in ways previously unseen in children's literature. Russell puts the reader into the shoes of his main characters, Paddy and Flynn, using AR to show the reader what the boys themselves see. There are moments in these books where the AR system and the traditional narrative system fully overlap, integrate, and enhance the narrative dimension of the story. While readers navigate the book by turning physical pages, the addition of AR at specific moments in the story not only provides a more immersive reading experience but, in this case, advances the narrative to great effect. This case study asks how AR can be used to enhance narrative structure and flow in a text-based novel, using formal gameplay analysis to examine how the AR and narrative systems interact and identifying which examples of this interaction work to the best effect. It analyzes the AR moments in the context of the book, deconstructs the unusual first-person perspective from the protagonist's point of view, and explores how these AR experiences help drive and truly augment the core narrative. The larger context of this study seeks to emancipate AR from its predominantly technological ontology and contribute to the development of AR as a genuine narrative device in fiction storytelling.
- Conference Article
- 10.1109/sitis.2013.69
- Dec 1, 2013
In this paper, we propose an augmented reality (AR) system with a laser projection device as a hands-on display at a science museum. The AR system provides virtual information, which learners can control for a visual explanation about an exhibited item. Learners develop their knowledge and understanding through the display without any modification to the item and/or the existing displayed explanation. We conducted an experiment using the AR system, with child visitors to Gamagori Museum of Earth, Life and the Sea. AR systems should meet the following criteria if they are to be considered effective: i) the display should make learners interested in the exhibited item, ii) learners should be able to easily handle the AR display, and iii) learners should construct their knowledge through using the display. In our experiment, we evaluated the AR display against these criteria. We conclude that the AR display system enables learners to construct their knowledge in reality, and that the system encourages learners' interest in the exhibited items.
- Research Article
- 10.3389/fonc.2021.723509
- Nov 1, 2021
- Frontiers in Oncology
Objective: To report the first use of a novel projected augmented reality (AR) system in open sinonasal tumor resections in preclinical models and to compare the AR approach with an advanced intraoperative navigation (IN) system. Methods: Four tumor models were created. Five head and neck surgeons participated in the study, performing virtual osteotomies. Unguided, AR, IN, and AR + IN simulations were performed, and statistical comparisons between approaches were obtained. The intratumoral cut rate was the main outcome. The groups were also compared in terms of the percentage of intratumoral, close, adequate, and excessive distances from the tumor. Data from a wearable gaze-tracker headset and NASA Task Load Index questionnaire results were analyzed as well. Results: A total of 335 cuts were simulated. Intratumoral cuts were observed in 20.7%, 9.4%, 1.2%, and 0% of the unguided, AR, IN, and AR + IN simulations, respectively (p < 0.0001). AR was superior to the unguided approach in univariate and multivariate models. The percentage of time spent looking at the screen during the procedures was 55.5% for the unguided approach and 0%, 78.5%, and 61.8% for AR, IN, and AR + IN, respectively (p < 0.001). The combined approach significantly reduced screen time compared with the IN procedure alone. Conclusion: We report the use of a novel AR system for oncological resections in open sinonasal approaches, with improved margin delineation compared with unguided techniques. AR mitigated the gaze-toggling drawback of IN. Further refinement of the AR system is needed before translating our experience to clinical practice.
- Research Article
- 10.1155/2018/8194726
- Apr 18, 2018
- Advances in Multimedia
As smartphones, tablet computers, and other mobile devices continue to dominate our digital ecosystem, many industries are using mobile or wearable devices to perform Augmented Reality (AR) functions in their workplaces to increase productivity and reduce unnecessary workloads. Mobile-based AR can be divided into three main types: phone-based AR, wearable AR, and projector-based AR. Among these, projector-based AR, or Spatial Augmented Reality (SAR), is the most immature and least recognized type of AR for end users: few commercial products provide projector-based AR functionality in a mobile manner, prices of mobile projectors remain relatively high, and many technical problems regarding projector-based AR are still unsolved. Nevertheless, it is projector-based AR that has the potential to solve a fundamental problem shared by most mobile-based AR systems. Also, the always-visible nature of projector-based AR is one good answer to the current user experience issues of phone-based AR and wearable AR systems. Hence, in this paper, we analyze the user experience and technical issues of common mobile-based AR systems, of the now-widespread phone-based AR systems, and of rising wearable AR systems. For each issue, we then propose and explain how projector-based AR can solve the problem and/or help enhance the user experience. Our proposed framework includes hardware designs and architectures as well as a software computing paradigm for mobile projector-based AR systems. The proposed design is evaluated by three experts using qualitative and semi-quantitative research approaches.
- Research Article
- 10.1167/jov.23.15.15
- Dec 1, 2023
- Journal of Vision
Augmented reality (AR) systems make it possible to present visual stimuli that intermix and interact with people's view of the natural world. But building an AR system that merges stimuli with our natural visual experience is hard. AR systems often suffer from technical and visual limitations, such as small eyeboxes, limited brightness, and narrow visual field coverage. An integral part of AR system development, therefore, is perceptual research that improves our understanding of when and why these limitations matter. I will describe a suite of perceptual studies designed to provide guidance for engineers on the visibility and appearance of wearable optical see-through AR displays. Our results highlight the idiosyncrasies of how our visual system integrates information from the two eyes, the multifaceted nature of perceptual phenomena in AR, and the trade-offs that are currently necessary to create an AR system that is both wearable and compelling.