Physics-Inspired Generative Models in Medical Imaging.

Abstract

Physics-inspired generative models (GMs), in particular diffusion models and Poisson flow models, enhance Bayesian methods and promise great utility in medical imaging. This review examines the transformative role of such generative methods. First, a variety of physics-inspired GMs, including denoising diffusion probabilistic models, score-based diffusion models, and Poisson flow generative models (including PFGM++), are revisited, with an emphasis on their accuracy, robustness and acceleration. Then, major applications of physics-inspired GMs in medical imaging are presented, comprising image reconstruction, image generation, and image analysis. Finally, future research directions are brainstormed, including unification of physics-inspired GMs, integration with vision-language models, and potential novel applications of GMs. Since the development of generative methods has been rapid, it is hoped that this review will give peers and learners a timely snapshot of this new family of physics-driven GMs and help capitalize on their enormous potential for medical imaging.
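As a concrete illustration of the score-based branch of this family: sampling amounts to integrating a reverse-time stochastic differential equation driven by the score of the noised data distribution. A minimal, self-contained sketch for a toy 1-D Gaussian dataset, where the analytically known score stands in for a trained network (the function names and the variance-exploding schedule here are illustrative assumptions, not the review's notation):

```python
import numpy as np

def analytic_score(x, sigma, mu0=2.0, s0=0.5):
    """Score of data ~ N(mu0, s0^2) perturbed by Gaussian noise of level sigma.
    In practice a neural network s_theta(x, sigma), trained by denoising
    score matching, would replace this closed form."""
    return -(x - mu0) / (s0**2 + sigma**2)

def ve_reverse_sde_sample(n_samples=20000, n_steps=500,
                          sigma_max=8.0, sigma_min=0.01, seed=0):
    """Euler-Maruyama integration of the variance-exploding reverse-time SDE."""
    rng = np.random.default_rng(seed)
    sigmas = np.geomspace(sigma_max, sigma_min, n_steps)
    x = rng.normal(0.0, sigma_max, n_samples)  # prior ~ N(0, sigma_max^2)
    for i in range(n_steps - 1):
        delta = sigmas[i]**2 - sigmas[i + 1]**2  # shrinking noise variance
        # each step moves samples from noise level sigmas[i] to sigmas[i+1]
        x = (x + delta * analytic_score(x, sigmas[i])
             + np.sqrt(delta) * rng.normal(size=n_samples))
    return x

samples = ve_reverse_sde_sample()
print(samples.mean(), samples.std())  # should approach the data stats (2.0, 0.5)
```

Denoising diffusion probabilistic models discretize essentially the same recipe, while Poisson flow models change the dynamics being integrated rather than the overall noise-then-reverse structure.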


Similar Papers
  • Preprint Article
  • 10.2196/preprints.51099
Comparative Analysis of Pretrained Text to Image Models for Accurate Radiological Image Generation for a Single Text Prompt (Preprint)
  • Jul 20, 2023
  • Shashwat Mookherjee + 4 more

BACKGROUND: Generative AI is a rapidly advancing field within artificial intelligence with wide-ranging applications. In medical science, machine learning and deep learning methods have already found extensive use. This study conducts a comparative analysis of seven freely available pretrained text-to-image models, generating radiological images from a single text prompt to determine which produces the most accurate results. Building on previous research that explored DALL-E 2's capabilities in understanding radiological images, the comparison offers insights into the potential of these models for medical image generation, with the aim of benefiting medical professionals, researchers, and the wider AI community: identifying the most accurate text-to-image model could enhance medical imaging applications, leading to improved diagnostics and treatment planning.

OBJECTIVE: To conduct a comparative study of seven freely available pretrained text-to-image models, with the specific aim of generating radiological images from a single text prompt and determining which model generates the most accurate images for medical applications. By evaluating the models' performance, the study seeks to highlight the potential applications and limitations of text-to-image models in medical image generation and to aid the AI community in selecting the most suitable model for generating accurate and reliable radiological images.

METHODS: Model selection: seven freely available pretrained text-to-image generative models, specifically designed for generating images from textual descriptions, were chosen. Model descriptions: a brief description of each model, along with its results, was provided to contextualize the findings. Image generation: all models were tested with the prompt "Photorealistic MRI scan of human lungs suffering from pneumonia." Comparative analysis: keeping in mind the actual imaging of the medical condition, a physiology expert was consulted to compare the generated images and determine which model produced the most accurate radiological image.

RESULTS: DALL-E 2 created the most realistic image of the lungs of a person suffering from pneumonia: it showed the thoracic cavity with the heart, successfully distinguished the left and right lungs, and depicted the septum, which most of the models could not. Midjourney showed the infection well, and possibly its spread, though its image was less realistic than DALL-E 2's, if still clear. Min-dalle highlighted the infected regions well but produced a less realistic image. Carefree Creator rendered the thoracic cavity well but is not very reliable for detecting infections. Big Sleep is ambiguous: if the white regions represent mucus congestion, it depicted lung congestion well, but it rendered the thoracic cavity poorly. Aphantasia used bright colours that might aid infection detection, even though it failed to show the lungs and the infection accurately. Deep Daze produced a very complicated image that makes identifying parts of the body and the infection very difficult.

CONCLUSIONS: Existing text-to-image models cannot yet generate radiological images with complete accuracy, though some performed better than others in specific respects. For example, DALL-E 2 properly distinguished the left and right lungs and showed the thoracic cavity, while Aphantasia showcased infection detection better than DALL-E 2 despite failing to render the lungs accurately. The study also indicates the need for better visualisation of medical conditions in existing radiological methods, for example the use of colours to highlight infections, as in Aphantasia's output. Many other factors must be considered when designing a visualisation method, and no particular method is suggested for immediate implementation; proper consultation with an expert is always the first step. These results are a starting step for radiological image generation. This study used only one prompt; further work would include better text prompts of the medical condition and prompts covering more varied conditions. Accurate AI-generated radiological images have many applications and benefits: machine learning tasks such as classification and segmentation require large training datasets, and generated images could help build datasets of the required size. Further developments could enable generating radiological images of specific conditions based on a user's particular prompt.

  • Research Article
  • 10.52783/anvi.v28.3061
Generative Models Beyond GANs: Innovations in Image and Text Synthesis
  • Dec 30, 2024
  • Advances in Nonlinear Variational Inequalities
  • Manmohan Singh

An evolution within generative models has taken place, moving beyond Generative Adversarial Networks (GANs) into differently structured approaches for both image and text generation. With the rise of advanced non-GAN generative models, this study analyses and compares the history and effectiveness of models such as VAEs, diffusion models, and Transformer-based architectures like GPT and DALL·E, identifying their strengths in creating high-quality, coherent, and diverse outputs across visual and text domains. The results reveal the superior stability of diffusion models, the interpretability of VAEs, and the improved contextual understanding gained by transformer-based architectures, giving a better picture of these models, their advantages and disadvantages, and their potential impact on the design of future applications. Common issues such as mode collapse, training instability, and high computational resource requirements are also examined, and novel solutions to mitigate these challenges are proposed. Experimental results further show that state-of-the-art (SoTA) performance can be achieved not only with GAN architectures but also with non-GAN models, which lend themselves to diverse use cases ranging from creative content generation to domain-specific tasks such as medical imagery and personalized content creation. Finally, this paper addresses the ethics of generative models and offers insights on their future development, concluding with a discussion of implications for bridging theoretical and practical aspects of generative modelling, marking a new era in the field.

  • Research Article
  • Cited by 6
  • 10.1002/smtd.202400672
This Microtubule Does Not Exist: Super‐Resolution Microscopy Image Generation by a Diffusion Model
  • Oct 14, 2024
  • Small Methods
  • Alon Saguy + 9 more

Generative models, such as diffusion models, have made significant advancements in recent years, enabling the synthesis of high‐quality realistic data across various domains. Here, the adaptation and training of a diffusion model on super‐resolution microscopy images are explored. It is shown that the generated images resemble experimental images, and that the generation process does not exhibit a large degree of memorization from existing images in the training set. To demonstrate the usefulness of the generative model for data augmentation, the performance of a deep learning‐based single‐image super‐resolution (SISR) method trained using generated high‐resolution data is compared against training using experimental images alone, or images generated by mathematical modeling. Using a few experimental images, the reconstruction quality and the spatial resolution of the reconstructed images are improved, showcasing the potential of diffusion model image generation for overcoming the limitations accompanying the collection and annotation of microscopy images. Finally, the pipeline is made publicly available, runnable online, and user‐friendly to enable researchers to generate their own synthetic microscopy data. This work demonstrates the potential contribution of generative diffusion models for microscopy tasks and paves the way for their future application in this field.

  • Research Article
  • 10.54254/2755-2721/52/20241115
Data augmentation-based enhanced fingerprint recognition using deep convolutional generative adversarial network and diffusion models
  • Mar 27, 2024
  • Applied and Computational Engineering
  • Yukai Liu

The progress of fingerprint recognition applications encounters substantial hurdles due to privacy and security concerns, leading to limited fingerprint data availability and stringent data quality requirements. This article endeavors to tackle the challenges of data scarcity and data quality in fingerprint recognition by implementing data augmentation techniques. Specifically, this research employed two state-of-the-art generative models in the domain of deep learning, namely Deep Convolutional Generative Adversarial Network (DCGAN) and the Diffusion model, for fingerprint data augmentation. Generative Adversarial Network (GAN), as a popular generative model, effectively captures the features of sample images and learns the diversity of the sample images, thereby generating realistic and diverse images. DCGAN, as a variant model of traditional GAN, inherits the advantages of GAN while alleviating issues such as blurry images and mode collapse, resulting in improved performance. On the other hand, Diffusion, as one of the most popular generative models in recent years, exhibits outstanding image generation capabilities and surpasses traditional GAN in some image generation tasks. The experimental results demonstrate that both DCGAN and Diffusion can generate clear, high-quality fingerprint images, fulfilling the requirements of fingerprint data augmentation. Furthermore, through the comparison between DCGAN and Diffusion, it is concluded that the quality of fingerprint images generated by DCGAN is superior to the results of Diffusion, and DCGAN exhibits higher efficiency in both training and generating images compared to Diffusion.

  • Research Article
  • 10.31891/csit-2025-2-1
ANALYSIS OF DIFFUSION MODELS AND BIOMEDICAL IMAGE GENERATION TOOLS
  • Jun 26, 2025
  • Computer systems and information technologies
  • Sergii Kuzmin + 1 more

This study investigates the effective generation of realistic histopathological medical images through fine-tuning of generative diffusion models, addressing critical needs in medical diagnostics, education, and research. High-quality synthetic histopathology images are essential for training medical professionals, augmenting limited datasets, and potentially enhancing diagnostic accuracy through machine learning applications. However, general-purpose image synthesis methods and limited annotated medical datasets pose significant challenges. Four prominent fine-tuning methods (LoRA, DreamBooth, Textual Inversion, and HyperNetwork) were systematically evaluated using the Stable Diffusion 1.5 generative model. These methods were rigorously assessed on a balanced dataset with 664 images for each distinct tissue class: normal, serrated, adenocarcinoma, and adenoma. Quantitative evaluations employing Fréchet Inception Distance (FID), Precision, and Recall metrics revealed significant performance differences among the methods. HyperNetwork and DreamBooth consistently yielded superior image fidelity and diversity. Specifically, HyperNetwork achieved notably low FID scores (e.g., 77.27 for adenocarcinoma) with robust Precision and Recall results, demonstrating enhanced realism and variability. DreamBooth similarly exhibited strong performance, validating its practical utility. In contrast, Textual Inversion consistently produced the weakest outcomes, characterized by significantly higher FID scores (exceeding 158) and notably low Recall values, underscoring its limitations for complex medical imaging applications. Although these quantitative insights are valuable, traditional metrics alone may not comprehensively capture clinical applicability, so qualitative evaluation by medical professionals remains essential. Additionally, there is an urgent need for domain-specific evaluation metrics and fine-tuning techniques explicitly tailored to histopathology imaging; such advancements could significantly enhance synthetic image quality and expand clinical and educational adoption.
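For reference, the FID metric reported above compares Gaussian fits to the feature distributions of real and generated images. A minimal numpy sketch, assuming feature vectors have already been extracted (e.g., by an Inception network):

```python
import numpy as np

def _psd_sqrt(mat):
    """Square root of a symmetric positive semi-definite matrix via eigendecomposition."""
    w, v = np.linalg.eigh(mat)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

def fid(feats_a, feats_b):
    """Frechet Inception Distance between two (n, d) feature arrays:
    ||mu_a - mu_b||^2 + Tr(C_a + C_b - 2 (C_a C_b)^{1/2})."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    a_half = _psd_sqrt(cov_a)
    # Tr((C_a C_b)^{1/2}) computed through a symmetric similar matrix
    cross = np.trace(_psd_sqrt(a_half @ cov_b @ a_half))
    diff = mu_a - mu_b
    return diff @ diff + np.trace(cov_a) + np.trace(cov_b) - 2.0 * cross
```

Identical feature sets give an FID near zero, and the score grows as the generated distribution drifts from the real one; real pipelines differ mainly in which feature extractor supplies `feats_a` and `feats_b`.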

  • Research Article
  • 10.70393/6a69656173.323633
Generative AI Models Theoretical Foundations and Algorithmic Practices
  • Feb 11, 2025
  • Journal of Industrial Engineering and Applied Science
  • Yongnian Cao + 2 more

Generative models in AI represent a new paradigm for machine learning, allowing computers to create realistic data across many categories, including text (NLP), images, and even physics simulations. This paper surveys the theory, algorithms, and applications of generative models, with particular focus on well-established techniques such as VAEs, GANs, and diffusion models, and stresses the importance of probabilistic generative modelling and information theory (e.g., KL divergence, the ELBO, and adversarial optimization). We cover algorithmic practices such as optimization techniques, multimodal and conditional generation, and efficient data-driven strategies, demonstrating the impact of these methods in real-world applications including text, image, and audio generation, industrial design, and scientific discovery. However, the field is still grappling with significant challenges: training instability, the need for huge computational resources, and a lack of consistent, unified treatment across applications. The paper finishes with an optimistic vision of the future, including more sample-efficient ways to learn, architectures that scale globally, and cohesive theoretical frameworks to bring out the best in generative AI. By combining theoretical understanding with practical implications, this paper explores generative AI technologies and their potential to transform whole industries and scientific disciplines.
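As one concrete instance of the information-theoretic machinery mentioned above, the KL regularizer in the VAE's ELBO has a simple closed form when the posterior is diagonal Gaussian and the prior is standard normal; a minimal sketch:

```python
import numpy as np

def gaussian_kl(mu, log_var):
    """Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ), the
    regularization term in the VAE evidence lower bound (ELBO):
    0.5 * sum(sigma^2 + mu^2 - 1 - log sigma^2)."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# KL is zero exactly when the posterior matches the prior,
# and grows as the encoder's output drifts away from N(0, I)
print(gaussian_kl(np.zeros(4), np.zeros(4)))  # -> 0.0
print(gaussian_kl(np.ones(4), np.zeros(4)))   # -> 2.0
```

In a trained VAE, `mu` and `log_var` are the encoder's outputs, and this term is added to the reconstruction loss to form the negative ELBO.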

  • Supplementary Content
  • 10.1098/rsta.2024.0322
Generative diffusion models in infinite dimensions: a survey
  • Jun 19, 2025
  • Philosophical transactions. Series A, Mathematical, physical, and engineering sciences
  • Giulio Franzese + 1 more

Diffusion models have recently emerged as a powerful class of generative models, achieving state-of-the-art performance in various domains such as image and audio synthesis. While most existing work focuses on finite-dimensional data, there is growing interest in extending diffusion models to infinite-dimensional function spaces. This survey provides a comprehensive overview of the theoretical foundations and practical applications of diffusion models in infinite dimensions. We review the necessary background on stochastic differential equations in Hilbert spaces, and then discuss different approaches to define generative models rooted in such formalism. Finally, we survey recent applications of infinite-dimensional diffusion models in areas such as generative modelling for function spaces, conditional generation of functional data and solving inverse problems. Throughout the survey, we highlight the connections between different approaches and discuss open problems and future research directions. This article is part of the theme issue ‘Generative modelling meets Bayesian inference: a new paradigm for inverse problems’.

  • Research Article
  • Cited by 19
  • 10.1016/j.cmpb.2022.107200
Multi-domain medical image translation generation for lung image classification based on generative adversarial networks
  • Nov 2, 2022
  • Computer Methods and Programs in Biomedicine
  • Yunfeng Chen + 7 more


  • Research Article
  • Cited by 1
  • 10.1016/j.preteyeres.2025.101353
AI image generation technology in ophthalmology: Use, misuse and future applications.
  • May 1, 2025
  • Progress in retinal and eye research
  • Benjamin Phipps + 9 more


  • Research Article
  • 10.1016/j.compmedimag.2025.102593
Diffusion model for medical image denoising, reconstruction and translation.
  • Sep 1, 2025
  • Computerized medical imaging and graphics : the official journal of the Computerized Medical Imaging Society
  • Wei Wang + 6 more


  • Preprint Article
  • 10.48550/arxiv.2306.12438
Aligning Synthetic Medical Images with Clinical Knowledge using Human Feedback
  • Jun 16, 2023
  • arXiv (Cornell University)
  • Gregory M Goldgof + 3 more

Generative models capable of capturing nuanced clinical features in medical images hold great promise for facilitating clinical data sharing, enhancing rare disease datasets, and efficiently synthesizing annotated medical images at scale. Despite their potential, assessing the quality of synthetic medical images remains a challenge. While modern generative models can synthesize visually-realistic medical images, the clinical validity of these images may be called into question. Domain-agnostic scores, such as FID score, precision, and recall, cannot incorporate clinical knowledge and are, therefore, not suitable for assessing clinical sensibility. Additionally, there are numerous unpredictable ways in which generative models may fail to synthesize clinically plausible images, making it challenging to anticipate potential failures and manually design scores for their detection. To address these challenges, this paper introduces a pathologist-in-the-loop framework for generating clinically-plausible synthetic medical images. Starting with a diffusion model pretrained using real images, our framework comprises three steps: (1) evaluating the generated images by expert pathologists to assess whether they satisfy clinical desiderata, (2) training a reward model that predicts the pathologist feedback on new samples, and (3) incorporating expert knowledge into the diffusion model by using the reward model to inform a finetuning objective. We show that human feedback significantly improves the quality of synthetic images in terms of fidelity, diversity, utility in downstream applications, and plausibility as evaluated by experts.
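One simple way a learned reward model could inform a finetuning objective, as in step (3) above, is to weight each sample's denoising loss by the predicted pathologist feedback. This is an illustrative assumption for exposition, not necessarily the paper's exact objective, and all names here are hypothetical:

```python
import numpy as np

def reward_weighted_denoising_loss(eps_pred, eps_true, rewards):
    """Per-sample noise-prediction error weighted by a reward model's
    scores, so samples the (learned) feedback model rates as clinically
    plausible contribute more to the finetuning gradient.
    Sketch only: the softmax weighting is an assumption."""
    per_sample = np.mean((eps_pred - eps_true) ** 2, axis=1)  # MSE per sample
    weights = np.exp(rewards) / np.sum(np.exp(rewards))       # softmax weights
    return np.sum(weights * per_sample)
```

With uniform rewards this reduces to the ordinary mean denoising loss, which is a useful sanity check: the reward model only changes *which* samples dominate the update, not the form of the loss.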

  • Front Matter
  • Cited by 3
  • 10.1155/2011/840181
Parallel Computation in Medical Imaging Applications
  • Jan 1, 2011
  • International Journal of Biomedical Imaging
  • Yasser M Kadah + 2 more


  • Research Article
  • Cited by 45
  • 10.3390/rs14194834
Diffusion Model with Detail Complement for Super-Resolution of Remote Sensing
  • Sep 28, 2022
  • Remote Sensing
  • Jinzhe Liu + 5 more

Remote sensing super-resolution (RSSR) aims to improve remote sensing (RS) image resolution while providing finer spatial details, which is of great significance for high-quality RS image interpretation. Traditional RSSR is based on optimization methods, which pay insufficient attention to small targets and lack model understanding and detail supplement. To alleviate these problems, we propose the generative Diffusion Model with Detail Complement (DMDC) for RS super-resolution. First, unlike traditional optimization models with insufficient image understanding, we introduce the diffusion model into RSSR tasks as a generative model and treat low-resolution images as condition information to guide image generation. Next, considering that generative models may not accurately recover specific small objects and complex scenes, we propose a detail supplement task to improve the recovery ability of DMDC. Finally, because the strong diversity of the diffusion model may be inappropriate for RSSR, we introduce a joint pixel constraint loss and denoising loss to optimize the direction of inverse diffusion. Extensive qualitative and quantitative experiments demonstrate the superiority of our method for RSSR with small and dense targets. Moreover, results from direct transfer to different datasets also show the superior generalization ability of DMDC.
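The joint objective described above, a denoising loss plus a pixel constraint steering the inverse diffusion toward the conditioning image, might be sketched as follows (the weighting `lam` and the exact form of each term are assumptions, not the paper's published formulation):

```python
import numpy as np

def joint_dmdc_loss(eps_pred, eps_true, x0_pred, x0_true, lam=0.1):
    """Denoise loss (noise-prediction MSE) plus a pixel constraint loss
    anchoring the predicted clean image to the reference, which limits
    the diffusion sampler's diversity. Illustrative sketch only."""
    denoise = np.mean((eps_pred - eps_true) ** 2)
    pixel = np.mean((x0_pred - x0_true) ** 2)
    return denoise + lam * pixel
```

The pixel term vanishes when the reconstruction matches the reference, so the balance `lam` controls how strongly diversity is traded for fidelity.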

  • Research Article
  • Cited by 1
  • 10.1002/sdtp.16343
P‐2.9: A review of image generation methods based on deep learning
  • Apr 1, 2023
  • SID Symposium Digest of Technical Papers
  • Chuang Wang + 2 more

Images, as a medium of visual information transmission, have the advantages of vividness, intuition, and easy understanding, and play an important role in the transmission and utilization of information. In recent years, owing to the rapid development of deep learning in image processing, neural-network-based image generative models have become a major research hotspot. Within deep learning, unsupervised learning has received increasing attention, especially deep generative models, which have made breakthrough progress [1]. Among them, the Variational Auto-Encoder (VAE), the Generative Adversarial Network (GAN), and the diffusion model are the three most representative unsupervised learning methods and are increasingly applied in deep generative modelling. High-quality image generation based on generative adversarial networks remains an active topic, while the diffusion model is a rising star favored by more and more researchers. This paper first summarizes the main research work, improvement mechanisms, and features of VAE- and GAN-based image generation methods, then introduces the principle of the rising diffusion model and its representative variants. Finally, the advantages and limitations of these methods are compared and analyzed, and prospects for future research are put forward.

  • Research Article
  • 10.7759/cureus.77391
Using the Regression Slope of Training Loss to Optimize Chest X-ray Generation in Deep Convolutional Generative Adversarial Networks.
  • Jan 13, 2025
  • Cureus
  • Chih-Hsiung Chen + 3 more

Diffusion models, variational autoencoders, and generative adversarial networks (GANs) are three common types of generative artificial intelligence models for image generation. Among these, GANs are the most frequently used for medical image generation and are often employed for data augmentation. However, due to the adversarial nature of GANs, where the generator and discriminator compete against each other, training can sometimes end with the model unable to generate meaningful images or even producing noise. This phenomenon is rarely discussed in the literature, and no studies have proposed solutions to address it. Such outcomes can introduce significant bias when GANs are used for data augmentation in medical image training. Moreover, GANs often require substantial computational power and storage, adding to the challenges. In this study, we used deep convolutional GANs for chest X-ray generation and found three typical training outcomes: two scenarios generated meaningful medical images and one failed to produce usable images. By analyzing the loss history during training, we observed that the regression line of the overall losses tends to diverge slowly. After excluding outlier losses, we found that the slope of the regression line within the stable loss segment indicates the optimal point to terminate training, ensuring the generation of meaningful medical images.
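The stopping heuristic described above, fitting a regression line to the loss history after excluding outliers and monitoring its slope, can be sketched as follows (the z-score outlier threshold is an assumption):

```python
import numpy as np

def stable_loss_slope(losses, z_thresh=3.0):
    """Slope of the least-squares regression line through a training-loss
    history, after excluding outlier losses more than z_thresh standard
    deviations from the mean. A slope near zero suggests the stable
    segment used to pick a termination point; the threshold is assumed."""
    losses = np.asarray(losses, dtype=float)
    z = (losses - losses.mean()) / losses.std()   # standardize to spot outliers
    keep = np.abs(z) < z_thresh
    steps = np.arange(len(losses))[keep]
    slope, _intercept = np.polyfit(steps, losses[keep], deg=1)
    return slope
```

Monitoring this slope over a sliding window of recent iterations would flag when the loss trend flattens or begins to diverge.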

More from: Annual review of biomedical engineering
  • Research Article
  • 10.1146/annurev-bioeng-103023-115236
Microvascularization in 3D Human Engineered Tissue and Organoids.
  • May 1, 2025
  • Annual review of biomedical engineering
  • Yu Jung Shin + 3 more

  • Open Access
  • Research Article
  • Cited by 5
  • 10.1146/annurev-bioeng-102723-065309
The Evolution of Systems Biology and Systems Medicine: From Mechanistic Models to Uncertainty Quantification.
  • May 1, 2025
  • Annual review of biomedical engineering
  • Lingxia Qiao + 3 more

  • Research Article
  • Cited by 1
  • 10.1146/annurev-bioeng-062824-121925
Designer Organs: Ethical Genetic Modifications in the Era of Machine Perfusion.
  • May 1, 2025
  • Annual review of biomedical engineering
  • Irina Filz Von Reiterdank + 8 more

  • Research Article
  • 10.1146/annurev-bioeng-102723-013922
Physics-Inspired Generative Models in Medical Imaging.
  • May 1, 2025
  • Annual review of biomedical engineering
  • Dennis Hein + 3 more

  • Research Article
  • 10.1146/annurev-bioeng-110122-015901
Emerging Technologies for Multiphoton Writing and Reading of Polymeric Architectures for Biomedical Applications.
  • May 1, 2025
  • Annual review of biomedical engineering
  • Jieliyue Sun + 4 more

  • Research Article
  • 10.1146/annurev-bioeng-110222-100246
Understanding the Lymphatic System: Tissue-on-Chip Modeling.
  • May 1, 2025
  • Annual review of biomedical engineering
  • William J Polacheck + 2 more

  • Research Article
  • 10.1146/annurev-bioeng-110222-103522
Microfabricated Organ-Specific Models of Tumor Microenvironments
  • May 1, 2025
  • Annual review of biomedical engineering
  • Jeong Min Oh + 3 more

  • Research Article
  • Cited by 1
  • 10.1146/annurev-bioeng-112823-103134
A Theoretical Approach in Applying High-Frequency Acoustic and Elasticity Microscopy to Assess Cells and Tissues.
  • May 1, 2025
  • Annual review of biomedical engineering
  • Frank Winterroth + 5 more

  • Research Article
  • Cited by 1
  • 10.1146/annurev-bioeng-110122-120158
Neurons as Immunomodulators: From Rapid Neural Activity to Prolonged Regulation of Cytokines and Microglia.
  • May 1, 2025
  • Annual review of biomedical engineering
  • Levi B Wood + 1 more

  • Research Article
  • 10.1146/annurev-bioeng-103023-122327
Human Organoids as an Emerging Tool for Genome Screenings.
  • May 1, 2025
  • Annual review of biomedical engineering
  • Francesco Andreatta + 2 more
