Augmented Reality With Dynamic Anatomy Modelling for Knee Arthroscopy

Abstract

Research on augmented reality (AR) for knee arthroscopy has not adequately focused on knee flexion during surgery. To overcome major AR errors caused by knee movement, this study presents an association model between the finite‐element models of the knee surface and bones to enable dynamic anatomy modelling. The association model allows the displacement of the knee surface elements and the reaction force of the bone elements to interact with each other. During knee flexion, the real‐time shape of the knee is captured with a colour and depth camera, and the association model deforms accordingly from the extension to the flexion state. The proposed model was evaluated using computed tomography data from the knees of six participants. The results showed that the association model successfully compensates for the movement of the femur and tibia within an error margin of only 3.85 mm around the drilling area. The proposed model could therefore enable effective AR‐based surgical navigation during knee surgeries.
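
The reported 3.85 mm figure is a landmark-style error: for each point of interest around the drilling area, the distance between where the deformed model places it and where it actually lies. A minimal sketch of such an error metric (function name and coordinates are illustrative, not taken from the paper):

```python
import math

def target_registration_error(predicted, ground_truth):
    """Mean Euclidean distance (mm) between model-predicted and
    ground-truth landmark positions, e.g. around a drilling area."""
    assert len(predicted) == len(ground_truth) and predicted
    return sum(math.dist(p, g) for p, g in zip(predicted, ground_truth)) / len(predicted)

# Hypothetical landmark coordinates in millimetres (camera frame).
pred = [(10.0, 20.0, 30.0), (15.0, 25.0, 35.0)]
gt = [(11.0, 21.0, 31.0), (14.0, 24.0, 36.0)]
print(f"{target_registration_error(pred, gt):.2f} mm")  # 1.73 mm here
```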

Similar Papers
  • Research Article
  • Cited by: 36
  • 10.1007/s11999.0000000000000233
Can Augmented Reality Be Helpful in Pelvic Bone Cancer Surgery? An In Vitro Study.
  • Feb 24, 2018
  • Clinical Orthopaedics & Related Research
  • Hwan Seong Cho + 6 more

Application of surgical navigation for pelvic bone cancer surgery may prove useful, but in addition to the fact that research supporting its adoption remains relatively preliminary, the actual navigation devices are physically large, occupying considerable space in already crowded operating rooms. To address this issue, we developed and tested a navigation system for pelvic bone cancer surgery assimilating augmented reality (AR) technology to simplify the system by embedding the navigation software into a tablet personal computer (PC). Using simulated tumors and resections in a pig pelvic model, we asked: Can AR-assisted resection reduce errors in terms of planned bone cuts and improve ability to achieve the planned margin around a tumor in pelvic bone cancer surgery? We developed an AR-based navigation system for pelvic bone tumor surgery, which could be operated on a tablet PC. We created 36 bone tumor models for simulation of tumor resection in pig pelves and assigned 18 each to the AR-assisted resection group and conventional resection group. To simulate a bone tumor, bone cement was inserted into the acetabular dome of the pig pelvis. Tumor resection was simulated in two scenarios. The first was AR-assisted resection by an orthopaedic resident and the second was resection using conventional methods by an orthopaedic oncologist. For both groups, resection was planned with a 1-cm safety margin around the bone cement. Resection margins were evaluated by an independent orthopaedic surgeon who was blinded as to the type of resection. All specimens were sectioned twice: first through a plane parallel to the medial wall of the acetabulum and second through a plane perpendicular to the first. The distance from the resection margin to the bone cement was measured at four different locations for each plane. The largest of the four errors on a plane was adopted for evaluation. 
Therefore, each specimen had two values of error, which were collected from two perpendicular planes. The resection errors were classified into four grades: ≤ 3 mm; 3 to 6 mm; 6 to 9 mm; and > 9 mm or any tumor violation. Student's t-test was used for statistical comparison of the mean resection errors of the two groups. The mean of 36 resection errors of 18 pelves in the AR-assisted resection group was 1.59 mm (SD, 4.13 mm; 95% confidence interval [CI], 0.24-2.94 mm) and the mean error of the conventional resection group was 4.55 mm (SD, 9.7 mm; 95% CI, 1.38-7.72 mm; p < 0.001). All specimens in the AR-assisted resection group had errors < 6 mm, whereas 78% (28 of 36) of errors in the conventional group were < 6 mm. In this in vitro simulated tumor model, we demonstrated that AR assistance could help to achieve the planned margin. Our model was designed as a proof of concept; although our findings do not justify a clinical trial in humans, they do support continued investigation of this system in a live animal model, which will be our next experiment. The AR-based navigation system provides additional information of the tumor extent and may help surgeons during pelvic bone cancer surgery without the need for more complex and cumbersome conventional navigation systems.
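
The four-grade classification described above is straightforward to mirror in code. A small sketch (thresholds follow the study's grading; the sample errors and function names are illustrative, and the study itself compared raw errors with Student's t-test rather than grades):

```python
from statistics import mean

def grade(error_mm, tumor_violation=False):
    """Classify a resection error by the study's four grades."""
    if tumor_violation or error_mm > 9:
        return "> 9 mm or tumor violation"
    if error_mm > 6:
        return "6 to 9 mm"
    if error_mm > 3:
        return "3 to 6 mm"
    return "<= 3 mm"

# Hypothetical per-plane resection errors (mm) for a few specimens.
ar_errors = [1.2, 0.8, 2.9, 5.1]
print(grade(2.9), "|", grade(5.1), "|", grade(12.0))
print(f"mean error: {mean(ar_errors):.2f} mm")  # mean error: 2.50 mm
```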

  • Research Article
  • Cited by: 80
  • 10.1038/ijos.2013.26
Real-time in situ three-dimensional integral videography and surgical navigation using augmented reality: a pilot study
  • May 24, 2013
  • International Journal of Oral Science
  • Hideyuki Suenaga + 7 more

To evaluate the feasibility and accuracy of a three-dimensional augmented reality system incorporating integral videography for imaging oral and maxillofacial regions, based on preoperative computed tomography data. Three-dimensional surface models of the jawbones, based on the computed tomography data, were used to create the integral videography images of a subject's maxillofacial area. The three-dimensional augmented reality system (integral videography display, computed tomography, a position tracker and a computer) was used to generate a three-dimensional overlay that was projected on the surgical site via a half-silvered mirror. Thereafter, a feasibility study was performed on a volunteer. The accuracy of this system was verified on a solid model while simulating bone resection. Positional registration was attained by identifying and tracking the patient/surgical instrument's position. Thus, integral videography images of jawbones, teeth and the surgical tool were superimposed in the correct position. Stereoscopic images viewed from various angles were accurately displayed. Change in the viewing angle did not negatively affect the surgeon's ability to simultaneously observe the three-dimensional images and the patient, without special glasses. The difference in three-dimensional position of each measuring point on the solid model and augmented reality navigation was almost negligible (<1 mm); this indicates that the system was highly accurate. This augmented reality system was highly accurate and effective for surgical navigation and for overlaying a three-dimensional computed tomography image on a patient's surgical area, enabling the surgeon to understand the positional relationship between the preoperative image and the actual surgical site, with the naked eye.

  • Research Article
  • 10.1007/s11548-024-03164-5
Simulated augmented reality-based calibration of optical see-through head-mounted display for surgical navigation.
  • May 23, 2024
  • International journal of computer assisted radiology and surgery
  • Ho-Gun Ha + 4 more

Calibration of an optical see-through head-mounted display is critical for augmented reality-based surgical navigation. While conventional methods have advanced, calibration errors remain significant. Moreover, prior research has focused primarily on calibration accuracy and procedure, neglecting the impact on the overall surgical navigation system. Consequently, these enhancements do not necessarily translate to accurate augmented reality in the optical see-through head-mounted display due to systemic errors, including those in calibration. This study introduces a simulated augmented reality-based calibration to address these issues. By replicating the augmented reality that appeared in the optical see-through head-mounted display, the method achieves calibration that compensates for augmented reality errors, thereby reducing them. The process involves two distinct calibration approaches, followed by adjusting the transformation matrix to minimize displacement in the simulated augmented reality. The efficacy of this method was assessed through two accuracy evaluations: registration accuracy and augmented reality accuracy. Experimental results showed an average translational error of 2.14 mm and rotational error of 1.06° across axes in both approaches. Additionally, augmented reality accuracy, measured by the overlay regions' ratio, increased to approximately 95%. These findings confirm the enhancement in both calibration and augmented reality accuracy with the proposed method. The study presents a calibration method using simulated augmented reality, which minimizes augmented reality errors. This approach, requiring minimal manual intervention, offers a more robust and precise calibration technique for augmented reality applications in surgical navigation.

  • Conference Article
  • Cited by: 2
  • 10.1109/globalsip.2017.8309188
Virtual reality realization technology and its application based on augmented reality
  • Nov 1, 2017
  • Ning Li + 6 more

We propose an Augmented Reality (AR) implementation technology based on Virtual Reality (VR) and build a system comprising a VR headset, one depth camera, one color camera, and a smartphone. Depth and color information are collected through the two cameras and, once available, fused into four-channel RGB-D images combining the RGB color image with depth information; simultaneous localization and mapping (SLAM) then transforms the RGB-D image sequence into a 3D model of the real scene, from which the scene layout is identified. Furthermore, AR techniques allow auxiliary information about objects in the scene to be added and virtual objects to be placed arbitrarily into reality, achieving the fusion of virtual objects with the real scene. Finally, the system produces a 3D visual effect observable by the human eye and achieves the virtual reality effect through the headset device. With our method, virtual and real objects can be seen together on the screen created from the depth and color cameras, achieving interaction between the virtual and real scenes.
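
Before SLAM can fuse the frames, each depth pixel must be turned into a scene point, which is a per-pixel back-projection through the depth camera's pinhole intrinsics. A minimal sketch of that step (the intrinsic values are placeholders, not from the paper):

```python
def backproject(u, v, depth_m, fx, fy, cx, cy):
    """Back-project one RGB-D pixel (u, v) with metric depth into a
    3D point in the camera frame via the pinhole model."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Placeholder intrinsics for a hypothetical depth camera.
fx = fy = 525.0
cx, cy = 319.5, 239.5
# The principal point at 1 m depth maps onto the optical axis.
print(backproject(319.5, 239.5, 1.0, fx, fy, cx, cy))  # (0.0, 0.0, 1.0)
```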

  • Research Article
  • Cited by: 1
  • 10.1007/s11548-025-03328-x
Depth-based registration of 3D preoperative models to intraoperative patient anatomy using the HoloLens 2
  • Mar 14, 2025
  • International Journal of Computer Assisted Radiology and Surgery
  • Enzo Kerkhof + 6 more

Purpose: In augmented reality (AR) surgical navigation, a registration step is required to align the preoperative data with the patient. This work investigates the use of the depth sensor of HoloLens 2 for registration in surgical navigation. Methods: An AR depth-based registration framework was developed. The framework aligns preoperative and intraoperative point clouds and overlays the preoperative model on the patient. For evaluation, three experiments were conducted. First, the accuracy of the HoloLens’s depth sensor was evaluated for both Long-Throw (LT) and Articulated Hand Tracking (AHAT) modes. Second, the overall registration accuracy was assessed with different alignment approaches. The accuracy and success rate of each approach were evaluated. Finally, a qualitative assessment of the framework was performed on various objects. Results: The depth accuracy experiment showed mean overestimation errors of 5.7 mm for AHAT and 9.0 mm for LT. For the overall alignment, the mean translation errors of the different methods ranged from 12.5 to 17.0 mm, while rotation errors ranged from 0.9 to 1.1 degrees. Conclusion: The results show that the depth sensor on the HoloLens 2 can be used for image-to-patient alignment with 1–2 cm accuracy and within 4 s, indicating that with further improvement in the accuracy, this approach can offer a convenient alternative to other time-consuming marker-based approaches. This work provides a generic marker-less registration framework using the depth sensor of the HoloLens 2, with extensive analysis of the sensor’s reconstruction and registration accuracy. It supports advancing the research of marker-less registration in surgical navigation.
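
Aligning preoperative and intraoperative point clouds usually starts from a coarse rigid estimate before a fine step such as ICP refines it. The simplest coarse stage, centroid alignment, can be sketched as follows (a generic registration sketch with toy data, not the authors' implementation):

```python
def centroid(points):
    """Arithmetic mean of a list of 3D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def coarse_translation(source, target):
    """Translation that aligns the source cloud's centroid with the
    target's -- the usual coarse initialisation before a fine
    registration step such as ICP."""
    cs, ct = centroid(source), centroid(target)
    return tuple(ct[i] - cs[i] for i in range(3))

# Toy preoperative and intraoperative point clouds (metres).
pre = [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (0.0, 3.0, 0.0)]
intra = [(2.0, 2.0, 1.0), (5.0, 2.0, 1.0), (2.0, 5.0, 1.0)]
print(coarse_translation(pre, intra))  # (2.0, 2.0, 1.0)
```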

  • Abstract
  • Cited by: 11
  • 10.1080/07853890.2018.1560068
Extending medical interfaces towards virtual reality and augmented reality
  • Mar 29, 2019
  • Annals of Medicine
  • Daniel Simões Lopes + 1 more

Introduction: The growing interest in Augmented Reality (AR) together with the renaissance of Virtual Reality (VR) opened new possibilities to redesign how professionals interact with medical images. Several medical specialties already rely on 2D and 3D image data for diagnosis, surgical planning, surgical navigation, medical education and patient-clinician communication. However, the vast majority of conventional medical interfaces and interaction techniques continue unchanged, while the most innovative solutions have not yet untapped the full potential of VR and AR, because extending conventional workstations to accommodate VR and AR interaction paradigms is not free of challenges. Notably, VR- and AR-based workstations, besides having to render complex anatomical data at interactive frame rates, must also promote proper anatomical insight, boost visual memory through seamless visual collaboration between professionals, free interaction from the seated desk setting (e.g., mouse and keyboard) so that users can adopt non-stationary postures and walk freely within a work space, and must support a fluid exchange of image data and 3D models, as this foments interesting discussions to solve clinical cases. Moreover, VR- and AR-based workstations should likewise be designed according to good human-computer interaction principles, since it is well known that medical professionals can be resistant to changes in their workflow. To meet these challenges, we present several case studies that serve as proofs of concept, represented as VR or AR prototypes, which were tested by professionals of different medical specialities. Case studies: We have identified specific areas where VR and AR paradigms can make a difference in how healthcare professionals visualize, manipulate and read 2D and 3D medical image data. The most obvious is medical education, where we are currently exploring implant planning using mobile AR and a tablet to train dentistry students.
Another specific area is radiodiagnostics, where we evaluated the positive impact of VR, namely, on how radiologists analyze images when immersed inside a virtual reading room [1], and on how radiologists may perform immersive virtual colonography. Also in surgical navigation, AR head-mounted displays carry the promise to assist surgeons performing laparoscopic surgery, in particular, how optical see-through head-mounted displays can assist the surgeon’s eye-hand coordination and to clarify the location of anatomical landmarks displayed on the video stream. Finally, we explore how AR, using multiple projection surfaces, can assist users to perform rehabilitation exercises by themselves under the offline supervision of a physiotherapist [2].

  • Research Article
  • Cited by: 73
  • 10.1007/s11548-011-0660-7
Evaluation of a portable image overlay projector for the visualisation of surgical navigation data: phantom studies
  • Oct 21, 2011
  • International Journal of Computer Assisted Radiology and Surgery
  • K Gavaghan + 6 more

Presenting visual feedback for image-guided surgery on a monitor requires the surgeon to perform time-consuming comparisons and diversion of sight and attention away from the patient. Deficiencies in previously developed augmented reality systems for image-guided surgery have, however, prevented the general acceptance of any one technique as a viable alternative to monitor displays. This work presents an evaluation of the feasibility and versatility of a novel augmented reality approach for the visualisation of surgical planning and navigation data. The approach, which utilises a portable image overlay device, was evaluated during integration into existing surgical navigation systems and during application within simulated navigated surgery scenarios. A range of anatomical models, surgical planning data and guidance information taken from liver surgery, cranio-maxillofacial surgery, orthopaedic surgery and biopsy were displayed on patient-specific phantoms, directly on to the patient's skin and on to cadaver tissue. The feasibility of employing the proposed augmented reality visualisation approach in each of the four tested clinical applications was qualitatively assessed for usability, visibility, workspace, line of sight and obtrusiveness. The visualisation approach was found to assist in spatial understanding and reduced the need for sight diversion throughout the simulated surgical procedures. The approach enabled structures to be identified and targeted quickly and intuitively. All validated augmented reality scenes were easily visible and were implemented with minimal overhead. The device showed sufficient workspace for each of the presented applications, and the approach was minimally intrusiveness to the surgical scene. The presented visualisation approach proved to be versatile and applicable to a range of image-guided surgery applications, overcoming many of the deficiencies of previously described AR approaches. 
The approach presents an initial step towards a widely accepted alternative to monitor displays for the visualisation of surgical navigation data.

  • Abstract
  • Cited by: 2
  • 10.1016/j.arthro.2008.04.052
Knee Kinematics After Double Bundle Versus Computer-Navigated Single-Bundle Anterior Cruciate Ligament Reconstruction (SS-52)
  • May 29, 2008
  • Arthroscopy: The Journal of Arthroscopic & Related Surgery
  • Aaron Gardiner + 3 more


  • Research Article
  • Cited by: 60
  • 10.2500/ajra.2014.28.4067
Inattentional blindness increased with augmented reality surgical navigation.
  • Sep 1, 2014
  • American Journal of Rhinology & Allergy
  • Benjamin J Dixon + 5 more

Augmented reality (AR) surgical navigation systems, designed to increase accuracy and efficiency, have been shown to negatively impact on attention. We wished to assess the effect "head-up" AR displays have on attention, efficiency, and accuracy, while performing a surgical task, compared with the same information being presented on a submonitor (SM). Fifty experienced otolaryngology surgeons (n = 42) and senior otolaryngology trainees (n = 8) performed an endoscopic surgical navigation exercise on a predissected cadaveric model. Computed tomography-generated anatomic contours were fused with the endoscopic image to provide an AR view. Subjects were randomized to perform the task with a standard endoscopic monitor with the AR navigation displayed on an SM or with AR as a single display. Accuracy, task completion time, and the recognition of unexpected findings (a foreign body and a critical complication) were recorded. Recognition of the foreign body was significantly better in the SM group (15/25 [60%]) compared with the AR alone group (8/25 [32%]; p = 0.02). There was no significant difference in task completion time (p = 0.83) or accuracy (p = 0.78) between the two groups. Providing identical surgical navigation on a SM, rather than on a single head-up display, reduced the level of inattentional blindness as measured by detection of unexpected findings. These gains were achieved without any measurable impact on efficiency or accuracy. AR displays may distract the user and we caution injudicious adoption of this technology for medical procedures.

  • Conference Article
  • Cited by: 4
  • 10.1109/ismar-adjunct54149.2021.00077
Device-Agnostic Augmented Reality Rendering Pipeline for AR in Medicine
  • Oct 1, 2021
  • Fabrizio Cutolo + 4 more

Visual augmented reality (AR) headsets have the potential to enhance surgical navigation by providing physicians with an egocentric visualization interface capable of seamlessly blending the virtual navigation aid with the real surgical scenario. However, technological and human-factor limitations still hinder the routine use of commercial AR headsets in clinical practice. The aim of this work is to unveil the AR rendering pipeline of a device-agnostic software framework conceived to fulfill strict requirements towards the realization of a functional and reliable AR-based surgical navigator and capable of supporting the deployment of AR applications for image-guided surgery on different AR headsets. The AR rendering pipeline provides highly accurate AR overlay under both video and optical see-through modalities with almost no perceivable difference in terms of perception of relative distances and depths when used in the peripersonal space. The rendering pipeline allows the setting of the intrinsic and extrinsic projection parameters of the virtual rendering cameras offline and at runtime: under video see-through modality, the rendering pipeline can be modified to adapt the warping of the camera frames and pursue an orthostereoscopic and almost natural perception of the real scene in the peripersonal space. Similarly, under optical see-through modality, the calibrated intrinsic and extrinsic parameters of the eye-display model can be updated by the user to account for the actual user’s eye position. The results of the performance tests with an eye-replacement camera show an average motion-to-photon latency of around 110 ms for both AR rendering modalities. The AR platform for surgical navigation has already proven its efficacy and reliability under VST modality during real surgical operations in craniomaxillofacial surgery.
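
Setting the intrinsic and extrinsic projection parameters of a virtual rendering camera, which the pipeline above allows both offline and at runtime, comes down to the standard pinhole model: transform a point into the camera frame with (R, t), then project it through the intrinsic matrix K. A minimal sketch with placeholder values (not the framework's actual API):

```python
def project(point, K, R, t):
    """Project a 3D point to pixel coordinates given intrinsics K and
    extrinsics (R, t) of a virtual rendering camera."""
    # Camera-frame coordinates: X_c = R @ X + t
    xc = [sum(R[i][j] * point[j] for j in range(3)) + t[i] for i in range(3)]
    # Pinhole projection through the intrinsic matrix K.
    u = (K[0][0] * xc[0] + K[0][2] * xc[2]) / xc[2]
    v = (K[1][1] * xc[1] + K[1][2] * xc[2]) / xc[2]
    return (u, v)

# Placeholder parameters: identity extrinsics, hypothetical intrinsics.
K = [[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 0.0]
print(project((0.0, 0.0, 1.0), K, R, t))  # (320.0, 240.0)
```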

  • Research Article
  • Cited by: 1
  • 10.2174/1874387001004010134
Influence of Lower Limb Clinical Physical Measurements of Female Athletes on Knee Motion During Continuous Jump Testing
  • Jan 1, 2010
  • The Open Sports Medicine Journal
  • Yasuharu Nagano + 5 more

Objectives: To assess the relationship between dynamic knee motion in female athletes during landing after jumping and lower limb clinical physical measurements, considered risk factors for anterior cruciate ligament (ACL) injury. We proposed that (1) knee valgus and flexion angles during landing are correlated with clinical physical measurements; (2) combining these measurements enables prediction of the knee valgus and flexion angles during landing. Methods: Sixty-one female collegiate basketball athletes performed a continuous jump test; the peak knee valgus and flexion angles were measured. The Q-angle, the ranges of motion (ROMs) of hip internal rotation (IR) and external rotation (ER), as well as ankle dorsiflexion (DF), navicular drop, leg-heel alignment, and balance ability as assessed by the Star Excursion Balance Test (SEBT) were measured. Stepwise linear regression analyses were used to assess whether these factors can predict the peak knee valgus or flexion angle. Results: Increased ROM of hip IR and navicular drop predicted 7.9% of the peak knee valgus angle variance. Increased ROMs of ankle DF and hip IR, navicular drop, and anterior balance predicted 29.0% of the peak knee flexion angle variance. The knee valgus and flexion angles during the continuous jump test were slightly correlated with clinical physical measurements. Conclusions: Proximal and distal joint alignment and balance ability influence knee motion during landing. The relationship between knee motion during landing and these factors is weak; therefore, lower limb movement during landing is almost independent of clinical physical measurements, and knee movement should be evaluated by itself.

  • Research Article
  • 10.3389/conf.fneur.2016.59.00023
Soft tissue treatment and neurological stimulation to increase range of motion after total knee replacement surgery
  • Jan 1, 2016
  • Frontiers in Neurology
  • Ellis Marc + 3 more


  • Supplementary Content
  • Cited by: 2
  • 10.1227/ons.0000000000001009
Evaluation Metrics for Augmented Reality in Neurosurgical Preoperative Planning, Surgical Navigation, and Surgical Treatment Guidance: A Systematic Review
  • Dec 26, 2023
  • Operative Neurosurgery
  • Tessa M Kos + 4 more

BACKGROUND AND OBJECTIVE: Recent years have shown an advancement in the development of augmented reality (AR) technologies for preoperative visualization, surgical navigation, and intraoperative guidance for neurosurgery. However, proving added value for AR in clinical practice is challenging, partly because of a lack of standardized evaluation metrics. We performed a systematic review to provide an overview of the reported evaluation metrics for AR technologies in neurosurgical practice and to establish a foundation for assessment and comparison of such technologies. METHODS: PubMed, Embase, and Cochrane were searched systematically for publications on assessment of AR for cranial neurosurgery on September 22, 2022. The findings were reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. RESULTS: The systematic search yielded 830 publications; 114 were screened full text, and 80 were included for analysis. Among the included studies, 5% dealt with preoperative visualization using AR, with user perception as the most frequently reported metric. The majority (75%) researched AR technology for surgical navigation, with registration accuracy, clinical outcome, and time measurements as the most frequently reported metrics. In addition, 20% studied the use of AR for intraoperative guidance, with registration accuracy, task outcome, and user perception as the most frequently reported metrics. CONCLUSION: For quality benchmarking of AR technologies in neurosurgery, evaluation metrics should be specific to the risk profile and clinical objectives of the technology. A key focus should be on using validated questionnaires to assess user perception; ensuring clear and unambiguous reporting of registration accuracy, precision, robustness, and system stability; and accurately measuring task performance in clinical studies. 
We provided an overview suggesting which evaluation metrics to use per AR application and innovation phase, aiming to improve the assessment of added value of AR for neurosurgical practice and to facilitate the integration in the clinical workflow.

  • Research Article
  • Cited by: 1
  • 10.1097/00005768-200405001-00720
Age and Gender Effects on Landing Kinematics of Youth Soccer Players during a Stop-jump task
  • May 1, 2004
  • Medicine & Science in Sports & Exercise
  • Bing Yu + 5 more

PURPOSE: Anterior cruciate ligament (ACL) injury is one of the most commonly seen knee injuries in sports. Motor control has been identified as a potential risk factor for non-contact ACL injuries. Previous studies reported gender differences in lower extremity kinematics and kinetics in selected athletic tasks. The purpose of this study was to investigate age effect on gender differences in lower extremity kinematics of youth soccer players during a stop-jump task. METHODS: Sixty youth recreational soccer players between 11 and 16 years of age were recruited and divided into 6 age groups with 5 males and 5 females in each group. Each subject performed five trials of the stop-jump task with maximum vertical jump effort. Three-dimensional hip and knee angles were collected. Regression analyses with dummy variables were performed to determine the trends of age and gender effects on body mass, standing height, relative body mass (body mass/standing height), and hip and knee joint angles during the landing of the stop-jump task. RESULTS: Age and gender have significant interaction effects on the knee and hip flexion angles and motions of youth recreational soccer players during the landing of the stop-jump task. Female and male players demonstrate similar knee flexion angles at initial ground contact and maximum knee flexion angles before 12 years of age. Female players had less knee flexion at the initial ground contact and maximum knee flexion after 13 years of age. These gender differences increased with age. Similar trends of age and gender interaction effects were also found for hip flexion at the initial contact and maximum knee flexion. The trends of age and gender interaction effects on knee and hip flexion angles were similar to those on body mass, standing height, and relative body mass (body mass/standing height). Female players also showed significantly increased knee valgus angles and increased hip external rotation angles during landing. 
CONCLUSIONS: Some of the gender differences in lower extremity motion patterns may occur after certain ages, and may be related to the physiological development such as strength. The findings of this study provide a basis for further studies on the age and gender differences in lower extremity motion patterns and on the prevention of non-contact ACL injuries.

  • Research Article
  • Cited by: 21
  • 10.3390/jcm12165203
Augmented and Virtual Reality for Preoperative Trauma Planning, Focusing on Orbital Reconstructions: A Systematic Review.
  • Aug 10, 2023
  • Journal of Clinical Medicine
  • Kathia Dubron + 5 more

This systematic review summarizes recent literature on the use of extended reality, including augmented reality (AR), mixed reality (MR), and virtual reality (VR), in preoperative planning for orbital fractures. A systematic search was conducted in PubMed, Embase, Web of Science and Cochrane on 6 April 2023. The included studies compared extended reality with conventional planning techniques, focusing on computer-aided surgical simulation based on Computed Tomography data, patient-specific implants (PSIs), fracture reconstruction of the orbital complex, and the use of extended reality. Outcomes analyzed were technical accuracy, planning time, operative time, complications, total cost, and educational benefits. A total of 6381 articles were identified. Four articles discussed the educational use of VR, while one clinical prospective study examined AR for assisting orbital fracture management. AR was demonstrated to ameliorate the accuracy and precision of the incision and enable the better identification of deep anatomical tissues in real time. Consequently, intraoperative imaging enhancement helps to guide the orientation of the orbital reconstruction plate and better visualize the precise positioning and fixation of the PSI of the fractured orbital walls. However, the technical accuracy of 2-3 mm should be considered. VR-based educational tools provided better visualization and understanding of craniofacial trauma compared to conventional 2- or 3-dimensional images.
