Phase-space synthesized digital holography for high space-bandwidth product imaging
High resolution and a wide field of view (FOV) are perennial goals of imaging, and both are tied to the space-bandwidth product (SBP) of the system. Currently, most methods focus on either resolution enhancement or FOV extension; few address both, and a generalized framework for joint space-frequency SBP expansion is lacking. We propose such a holographic imaging method, termed phase-space synthesized digital holography (PSH), which can improve and adjust resolution and FOV simultaneously based on a phase-space analysis. Through a controllable SBP expansion in phase space by multiangle divergent spherical-wave illumination, a synthesized hologram is obtained to reconstruct a resolution-enhanced and FOV-extended image. As a general methodology of SBP expansion, the proposed method could offer new insights for the imaging community.
- Dissertation
- 10.7907/z9h9937r
- Jan 1, 2017
Demand for imaging systems with a high space-bandwidth product (SBP) is increasing in modern biomedical research as the amount of information to be handled grows. However, conventional microscopy has a limited SBP of about 10 megapixels: if a user wants a high-resolution image, the field of view (FOV) is reduced, and if a wide FOV is necessary, the user must give up resolution. A common way of overcoming this SBP limit in conventional microscopy is to use mechanical translation stages and scan across a wide sample area; however, imaging a large area this way with a high-numerical-aperture (NA) objective lens is time consuming. This thesis presents compact imaging systems based on Fourier ptychographic microscopy for biomedical applications that increase SBP without any mechanical moving parts: one is an incubator-embedded imaging system for in-vitro cell culture monitoring, and the other is a high-throughput 96-well-plate imaging system for fast drug screening.
- Conference Article
4
- 10.1117/12.842778
- Feb 11, 2010
There is always a tradeoff between resolution and field of view (FOV) in an imaging system. This limit can be due to the number of pixels in the detector, but a fundamental limit, the space-bandwidth product (SBP), also exists in any optical system; it scales as the FOV area divided by the area of the diffraction-limited spot. The SBP can only be increased by increasing the size of the optical system. In applications where the size of the optical system is constrained, such as endoscopes, the SBP will ultimately limit the resolution or FOV. However, there is a way to provide both high resolution and a wide FOV without changing the total number of pixels in the image. The technique is called foveated imaging because it mimics the human eye, in which the fovea has higher resolution at the center of the FOV than the surrounding retina. A similar effect can be achieved optically by introducing a large amount of barrel distortion in the lens design. The result is an effective increase in magnification at the center of the FOV, and reduced resolution but larger angular sampling at the edge. The stretching effect of the distortion can be compensated for computationally to provide an on-screen display that is not distorted, but merely appears blurred at the edges. Such an objective will enable endomicroscopy while still providing "peripheral vision" to allow endoscopists to navigate and locate regions of interest.
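The scaling just described, SBP ≈ FOV area divided by the diffraction-limited spot area, can be made concrete with a short sketch; the wavelength, NA, and FOV below are illustrative assumptions, not values from the paper:

```python
def sbp(fov_area_mm2, wavelength_nm, na):
    """Space-bandwidth product: FOV area divided by the area of one
    diffraction-limited resolution cell (half-pitch ~ lambda / (2 NA))."""
    half_pitch_mm = wavelength_nm * 1e-6 / (2 * na)  # nm -> mm
    return fov_area_mm2 / half_pitch_mm ** 2

# Illustrative numbers: 520 nm light, 0.1 NA, 10 mm^2 field of view
# -> roughly 1.5 million resolvable elements
print(f"{sbp(10.0, 520, 0.1):.2e}")
```

Shrinking the spot (higher NA) or growing the FOV both raise the SBP, which is why, at fixed system size, one is traded against the other.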
- Research Article
332
- 10.1364/optica.2.000904
- Oct 14, 2015
- Optica
We demonstrate a new computational illumination technique that achieves large space-bandwidth-time product, for quantitative phase imaging of unstained live samples in vitro. Microscope lenses can have either large field of view (FOV) or high resolution, not both. Fourier ptychographic microscopy (FPM) is a new computational imaging technique that circumvents this limit by fusing information from multiple images taken with different illumination angles. The result is a gigapixel-scale image having both wide FOV and high resolution, i.e. large space-bandwidth product (SBP). FPM has enormous potential for revolutionizing microscopy and has already found application in digital pathology. However, it suffers from long acquisition times (on the order of minutes), limiting throughput. Faster capture times would not only improve imaging speed, but also allow studies of live samples, where motion artifacts degrade results. In contrast to fixed (e.g. pathology) slides, live samples are continuously evolving at various spatial and temporal scales. Here, we present a new source coding scheme, along with real-time hardware control, to achieve 0.8 NA resolution across a 4x FOV with sub-second capture times. We propose an improved algorithm and new initialization scheme, which allow robust phase reconstruction over long time-lapse experiments. We present the first FPM results for both growing and confluent in vitro cell cultures, capturing videos of subcellular dynamical phenomena in popular cell lines undergoing division and migration. Our method opens up FPM to applications with live samples, for observing rare events in both space and time.
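The core idea behind FPM's information fusion, that each illumination angle shifts a different band of the object spectrum through the fixed pupil, can be sketched with a toy forward model; this is a minimal simulation with assumed sizes, not the authors' reconstruction code:

```python
import numpy as np

# Toy FPM forward model: tilted illumination shifts the object spectrum,
# so each LED routes a different frequency band through the fixed pupil.
rng = np.random.default_rng(0)
obj = rng.random((64, 64))                       # toy amplitude object
spectrum = np.fft.fftshift(np.fft.fft2(obj))     # centered object spectrum

def low_res_image(spectrum, shift, pupil_radius=8):
    """Capture under one LED: shift the spectrum (tilted illumination),
    crop with a circular pupil, return the low-resolution intensity."""
    n = spectrum.shape[0]
    y, x = np.ogrid[:n, :n]
    pupil = (y - n // 2) ** 2 + (x - n // 2) ** 2 <= pupil_radius ** 2
    shifted = np.roll(spectrum, shift, axis=(0, 1))
    return np.abs(np.fft.ifft2(np.fft.ifftshift(shifted * pupil))) ** 2

# Three LEDs -> three different passbands; FPM stitches such captures
# together in Fourier space into one large synthetic aperture.
imgs = [low_res_image(spectrum, s) for s in [(0, 0), (0, 12), (12, 0)]]
```

The source-coding scheme in the paper speeds this up by multiplexing several LEDs per capture rather than taking one image per angle.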
- Research Article
6
- 10.1007/s11548-018-1838-z
- Aug 11, 2018
- International journal of computer assisted radiology and surgery
To evaluate the applicability of the Calypso® wireless transponder tracking system (Varian Medical Systems Inc., USA) for real-time tumor motion tracking during surgical procedures on tumors in non-rigid target areas. An accuracy assessment was performed for an extended electromagnetic field of view (FoV) of 27.5 × 27.5 × 22.5 cm (which includes the standard FoV of 14 × 14 × 19 cm) in which 5-DOF wireless Beacon® transponders can be tracked. Using a custom-made measurement setup, we assessed single-transponder relative accuracy, absolute accuracy, and jitter throughout the extended FoV at 1440 locations spaced 2.5 cm apart in each orthogonal direction. The NDI Polaris Spectra optical tracking system (OTS) was used as a reference. Measurements were taken in a room without surrounding distorting factors and repeated in an operating room (OR). In the OR, the influence of a carbon-fiber and a regular stainless-steel OR tabletop was investigated. The calibration of the OTS and transponder system resulted in an average root-mean-square error (RMSE) vector of 0.03 cm. For both the standard and extended FoV, all accuracy measures depended on the transponder-to-tracking-array (TA) distance, and the absolute accuracy also depended on the TA-to-OR-tabletop distance. The latter influence was reproducible, and after calibrating for it, the residual error was below 0.1 cm RMSE within the entire standard FoV. Within the extended FoV, this residual RMSE did not exceed 0.1 cm for transponder-to-TA distances up to 25 cm. This study shows that transponder tracking is promising for accurate tumor tracking in the operating room. This applies when using the standard FoV, but also when using the extended FoV up to 25 cm above the TA, substantially increasing flexibility.
- Research Article
39
- 10.1117/1.ap.4.5.056002
- Sep 27, 2022
- Advanced Photonics
The transport of intensity equation (TIE) is a well-established non-interferometric phase retrieval approach that enables quantitative phase imaging (QPI) by simply measuring intensity images at multiple axially displaced planes. The advantage of a TIE-based QPI system is its compatibility with partially coherent illumination, which provides speckle-free imaging with resolution beyond the coherent diffraction limit. However, TIE is generally implemented with a brightfield (BF) configuration, and the maximum achievable imaging resolution is still limited to the incoherent diffraction limit (twice the coherent diffraction limit). It is desirable for TIE-related approaches to surpass this limit and achieve high-throughput [high-resolution and wide field of view (FOV)] QPI. We propose a hybrid BF and darkfield transport of intensity (HBDTI) approach for high-throughput quantitative phase microscopy. Two through-focus intensity stacks corresponding to BF and darkfield illumination are acquired through a low-numerical-aperture (NA) objective lens. The high-resolution, large-FOV complex amplitude (both quantitative absorption and phase distributions) can then be synthesized by an iterative phase retrieval algorithm that takes the coherence model decomposition into account. The effectiveness of the proposed method is experimentally verified by retrieval of a USAF resolution target and different types of biological cells. The experimental results demonstrate that the half-width imaging resolution can be improved from 1230 nm to 488 nm, a 2.5× expansion, across a 4× FOV of 7.19 mm², corresponding to a 6.25× increase in space-bandwidth product from ∼5 to ∼30.2 megapixels.
In contrast to conventional TIE-based QPI methods, where only BF illumination is used, the synthetic-aperture process of HBDTI further incorporates darkfield illumination to expand the accessible object frequencies, thereby significantly extending the maximum achievable resolution from 2NA to ∼5NA, a ∼5× improvement over the coherent diffraction limit. Given its capability for high-throughput QPI, the proposed HBDTI approach is expected to be adopted in biomedical fields such as personalized genomics and cancer diagnostics.
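The reported figures are mutually consistent, since SBP scales with the square of the lateral resolution gain; a quick arithmetic check:

```python
# Consistency check of the reported HBDTI gains: SBP scales with the
# square of the lateral resolution improvement.
res_gain = 1230 / 488                 # half-width resolution: ~2.52x per axis
sbp_gain = res_gain ** 2              # area/SBP factor: ~6.35x (~6.25x quoted)
print(round(res_gain, 2), round(sbp_gain, 2))   # 2.52 6.35
print(round(5 * sbp_gain, 1))                   # ~5 MP -> 31.8 (~30.2 quoted)
```

The small residuals against the quoted 6.25× and ∼30.2 MP come from rounding the resolution gain to 2.5×.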
- Research Article
11
- 10.1016/j.optcom.2017.05.036
- Jun 6, 2017
- Optics Communications
Group-based sparse representation for Fourier ptychography microscopy
- Conference Article
1
- 10.1117/12.2511336
- Mar 7, 2019
Based on artificial compound eyes and human vision mechanisms, we propose a hybrid bionic imaging method that achieves field of view (FOV) extension and foveated imaging simultaneously. The imaging model of the proposed method is built, and the key parameters are derived. Simulations are then carried out to estimate the properties of the model, including the FOV extension ratio (FER), foveal ratio, fovea moving range, and so on. Finally, a prototype is developed and imaging experiments are carried out. The experimental results agree well with the simulations, demonstrating the potential of the proposed method for intelligent surveillance and automatic object detection and recognition at low cost.
- Conference Article
- 10.1117/12.2507006
- Mar 4, 2019
We investigate quantitative phase imaging techniques based on oblique illumination, including differential phase contrast microscopy (DPC) and Fourier ptychographic microscopy (FPM). DPC uses partially coherent, asymmetric illumination to achieve a 2× resolution improvement but has a small field of view (FOV). FPM achieves both wide FOV and high resolution but requires a large number of measurements. Achieving high space-bandwidth product (SBP) imaging in real time remains challenging. Our goal is to develop a data-driven approach that enables highly multiplexed illumination to substantially improve the acquisition speed of high-SBP quantitative phase imaging. To do so, we abandon the traditional sampling strategy and phase retrieval algorithms. Instead, we design a convolutional neural network (CNN) that uses only 4 brightfield and 3 darkfield images under asymmetrically coded illumination as input and predicts high-SBP phase images. In particular, instead of restoring a deterministic image, our CNN predicts pixel-wise probability distributions (Laplace), each characterized by a location and a scale. The predicted location map corresponds to the desired high-resolution phase image, while the scale map provides per-pixel confidence in the prediction. Additionally, we show the potential of transfer learning: with minor extra training, the CNN can be optimized for different cell types. Experimental results demonstrate that the proposed method is robust against experimental imperfections, e.g., aberrations and misalignment, and reconstructs high-SBP phase images with significantly reduced acquisition and processing times.
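Predicting pixel-wise Laplace distributions as described above amounts to training with a Laplace negative log-likelihood loss over the predicted location and scale maps; a minimal numpy sketch of such a loss, with illustrative names rather than the authors' implementation:

```python
import numpy as np

def laplace_nll(y, loc, scale):
    """Mean per-pixel negative log-likelihood of a Laplace distribution:
    |y - loc| / scale + log(2 * scale). `loc` is the phase estimate and
    `scale` the per-pixel uncertainty (larger scale = lower confidence)."""
    return np.mean(np.abs(y - loc) / scale + np.log(2 * scale))

# Toy check: a confident, accurate prediction scores lower (better) than
# an equally confident but wrong one.
y = np.zeros((4, 4))
good = laplace_nll(y, loc=np.zeros((4, 4)), scale=np.full((4, 4), 0.1))
bad = laplace_nll(y, loc=np.ones((4, 4)), scale=np.full((4, 4), 0.1))
print(good < bad)  # True
```

Minimizing this loss pushes `loc` toward the true phase while letting the network inflate `scale` wherever its prediction is unreliable, which is exactly the per-pixel confidence map the abstract describes.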
- Research Article
1
- 10.1364/josaa.516572
- Jun 14, 2024
- Journal of the Optical Society of America. A, Optics, image science, and vision
The space-bandwidth product (SBP) limitation makes it difficult to obtain an image with both high spatial resolution and a large field of view (FoV) through commonly used optical imaging systems. Although FoV and spectrum stitching provide solutions for SBP expansion, they rely on spatial and spectral scanning, which lead to massive numbers of image captures and a low processing speed. To solve this problem, we previously reported a physics-driven deep SBP-expanded framework (Deep SBP+) [J. Opt. Soc. Am. A 40, 833 (2023), doi:10.1364/JOSAA.480920]. Deep SBP+ can reconstruct an image with both high spatial resolution and a large FoV from a low-spatial-resolution image of the full FoV and several high-spatial-resolution images of sub-FoVs. Physically, Deep SBP+ reconstructs the convolution kernel between the low- and high-spatial-resolution images and improves the spatial resolution through deconvolution. But Deep SBP+ needs multiple high-spatial-resolution images in different sub-FoVs, which inevitably complicates the operation. To further reduce the number of image captures, we report an updated version, Deep SBP+ 2.0, which can reconstruct an SBP-expanded image from one low-spatial-resolution image of the full FoV and a single high-spatial-resolution image of a sub-FoV. Different from Deep SBP+, Deep SBP+ 2.0 assumes the convolution kernel to be a Gaussian distribution, which keeps the kernel calculation simple and in line with the physics. Moreover, improved deep neural networks have been developed to enhance the generation capability. Through simulations and experiments, the receptive field is analyzed to show that a high-spatial-resolution image of a sub-FoV can guide the generation of the entire FoV. Furthermore, we discuss the requirements on the sub-FoV image for obtaining an SBP-expanded image of high quality. Considering its SBP expansion capability and convenient operation, the updated Deep SBP+ 2.0 can be a useful tool for pursuing images with both high spatial resolution and a large FoV.
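The physics Deep SBP+ exploits, deconvolving a low-resolution image with a (here Gaussian) convolution kernel, can be illustrated with a classical Wiener-deconvolution sketch; this is a toy stand-in for the learned reconstruction, with assumed sizes and regularization:

```python
import numpy as np

def gaussian_kernel(n, sigma):
    """Centered 2D Gaussian blur kernel, normalized to unit sum."""
    ax = np.arange(n) - n // 2
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def wiener_deconv(blurred, kernel, eps=1e-2):
    """Frequency-domain Wiener deconvolution with regularizer eps."""
    K = np.fft.fft2(np.fft.ifftshift(kernel), s=blurred.shape)
    B = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(B * np.conj(K) / (np.abs(K) ** 2 + eps)))

rng = np.random.default_rng(1)
sharp = rng.random((64, 64))                 # stand-in high-res image
k = gaussian_kernel(64, sigma=2.0)           # assumed Gaussian kernel
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(np.fft.ifftshift(k))))
restored = wiener_deconv(blurred, k)
# `restored` recovers mid frequencies suppressed by the Gaussian blur
```

Deep SBP+ 2.0 differs in that the Gaussian kernel parameters are estimated from a single high-resolution sub-FoV image and the inversion is carried out by a network rather than a fixed Wiener filter.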
- Research Article
21
- 10.1063/1.5050833
- Mar 1, 2019
- Review of Scientific Instruments
A novel imaging method using Risley prisms is proposed to achieve super-resolution imaging and field of view (FOV) extension. The mathematical models are developed, and solutions for sub-pixel imaging for super-resolution reconstruction are presented. Simulations show that, for imaging systems whose resolution is limited by pixel size, the proposed method can enhance the image resolution up to the optical diffraction limit of the system. A prototype was developed. Experimental results show that the scene-resolving capacity can be enhanced by 2.0 times with a resolution improvement factor of 4, and the FOV extension results agree with the simulations, providing a promising approach for super-resolution reconstruction, large-FOV imaging, and foveated imaging with low cost and high efficiency.
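The sub-pixel imaging principle the abstract relies on can be sketched without the Risley-prism optics: low-resolution frames offset by fractions of a detector pixel are interlaced onto a finer grid. Below is a toy model assuming 2×2 pixel binning and four half-pixel offsets, not the paper's prism geometry:

```python
import numpy as np

# Toy sub-pixel imaging: four low-resolution frames, each offset by half
# a detector pixel, are interlaced onto a 2x finer reconstruction grid.
rng = np.random.default_rng(3)
scene = rng.random((32, 32))                 # scene on the fine grid

def capture(scene, dy, dx):
    """One low-res frame: shift the scene by (dy, dx) fine pixels,
    then 2x2-bin down to the detector resolution."""
    s = np.roll(scene, (-dy, -dx), axis=(0, 1))
    return s.reshape(16, 2, 16, 2).mean(axis=(1, 3))

frames = {(dy, dx): capture(scene, dy, dx) for dy in (0, 1) for dx in (0, 1)}
recon = np.zeros_like(scene)
for (dy, dx), f in frames.items():
    recon[dy::2, dx::2] = f                  # interlace onto the fine grid
# recon equals the scene blurred by the 2x2 pixel aperture but sampled at
# the fine pitch: the pixel-size limit is removed, only the optical
# (diffraction) blur remains, matching the claim in the abstract.
```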
- Conference Article
- 10.1117/12.2316494
- Mar 5, 2018
As optoelectronic information technology matures, there is great demand for optical systems with both high resolution and a wide field of view (FOV). However, as conventional applied optics shows, these two characteristics are in tension: the FOV and the imaging resolution limit each other. Here, based on a study of typical wide-FOV optical system designs, we propose a monocentric multi-scale system design method to solve this problem. Consisting of a concentric spherical lens and a series of micro-lens arrays, this system effectively improves imaging quality. As an example, we designed a typical imaging system with a focal length of 35 mm, an instantaneous field angle of 14.7″, and an FOV of 120°. By analyzing the imaging quality, we demonstrate that across the different fields, all MTF values at the Nyquist sampling frequency of 200 lp/mm are higher than 0.4, in good accordance with our design.
- Conference Article
- 10.1117/12.2284458
- Oct 24, 2017
High-resolution (HR) and wide field-of-view (FOV) microscopic imaging plays a central role in diverse applications such as high-throughput screening and digital pathology. However, in a bright-field microscopy system, high resolution and a wide FOV cannot be achieved simultaneously, limiting applications that require a large space-bandwidth product (SBP). Various super-resolution techniques have been proposed to break this limitation, such as on-chip sub-pixel scanning methods, structured illumination microscopy, and Fourier ptychographic microscopy (FPM). Among these, FPM has become increasingly popular because it combines the numerical apertures (NAs) of the objective lens and the illumination to form a larger synthetic system NA without sacrificing FOV; the resolution-FOV tradeoff is thus effectively decoupled. In addition, an FPM system is convenient to build: one simply replaces the illumination system of a bright-field microscope with a commercial programmable LED board. Many efforts have been made lately to improve the accuracy and efficiency of FPM; to date, however, the effective imaging NA achievable with a typical FPM system is still limited to the range of 0.4-0.7. Here, we build an FPM platform using an oil-immersion condenser to boost the resolution of a bright-field microscopy system and significantly increase its SBP. This FPM system combines a 10×/0.4 NA objective lens and a 1.2 NA oil-immersion condenser to synthesize a system NA of 1.6. We confirmed the accuracy of this technique by achieving a half-pitch resolution of 154 nm at a wavelength of 435 nm with a FOV of 2.34 mm², corresponding to an SBP of 98.5 megapixels (~50 times higher than that of a conventional incoherent microscope with the same resolution). We also demonstrated the effectiveness of this approach by imaging various biological samples, such as human blood smears.
Our work indicates that FPM is an attractive method that could broadly benefit wide-field imaging applications demanding large SBP, and that it still has great potential to further increase the SBP of bright-field microscopes.
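The quoted numbers can be cross-checked directly: the synthetic NA of 1.6 implies a coherent half-pitch limit of about 136 nm at 435 nm (the measured 154 nm sits just above it, as expected), and counting 154 nm half-pitch cells across 2.34 mm² reproduces the ~98.5-megapixel SBP up to rounding:

```python
# Cross-check of the oil-immersion FPM figures quoted above.
na_syn = 0.4 + 1.2                     # objective NA + condenser NA = 1.6
half_pitch_nm = 435 / (2 * na_syn)     # coherent half-pitch limit at 435 nm
sbp = 2.34e12 / 154 ** 2               # 2.34 mm^2 in nm^2 / (154 nm)^2 cells
print(round(half_pitch_nm, 1))         # 135.9 (measured 154 nm is above it)
print(round(sbp / 1e6, 1))             # 98.7 megapixels (~98.5 quoted)
```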
- Conference Article
9
- 10.2312/egve/egve04/071-078
- Jun 8, 2004
The difficulties people frequently have navigating in virtual environments (VEs) are well known. Usually these difficulties are quantified in terms of performance (e.g., time taken or number of errors made in following a path), with these data used to compare navigation in VEs to equivalent real-world settings. However, an important cause of any performance differences is changes in people's navigational behaviour. This paper reports a study that investigated the effect of visual scene fidelity and field of view (FOV) on participants' behaviour in a navigational search task, to help identify the thresholds of fidelity required for efficient VE navigation. With a wide FOV (144 degrees), participants spent a significantly larger proportion of their time travelling through the VE, whereas participants who used a normal FOV (48 degrees) spent significantly longer standing in one place planning where to travel. Also, participants who used a wide FOV and a high-fidelity scene came significantly closer to conducting a perfect search (visiting each place once). In an earlier real-world study, participants completed 93% of their searches perfectly and planned where to travel while they moved. Thus, navigating a high-fidelity VE with a wide FOV increased the similarity between VE and real-world navigational behaviour, which has important implications for both VE design and understanding human navigation. Detailed analysis of the errors participants made during their non-perfect searches highlighted a dramatic difference between the two FOVs. With a narrow FOV, participants often travelled right past a target without it appearing on the display, whereas with the wide FOV, targets displayed towards the sides of participants' overall FOV were often not searched, indicating a problem with the demands such a wide FOV display makes on human visual attention.
- Research Article
1
- 10.1364/ol.557741
- Apr 1, 2025
- Optics letters
Image resolution and field of view in far-field optical microscopy are often inversely proportional to one another due to digital sampling limitations imposed by the magnification of the system and the pixel size of the sensor. We present a method including a spatial shifting mechanism and a reconstruction algorithm that bypasses this trade-off by shifting the sample to be imaged by subpixel increments, before registering the images via phase correlation and combining the resulting registered images using the shift-and-add approach. Importantly, this method requires no specific optical components that are uncommon to commercially available or custom-built microscope systems. The findings of the presented study demonstrate an improvement to spatial resolution of ∼42% while maintaining the system's field of view (FOV), leading to a more than twofold improvement to the system's space-bandwidth product (SBP).
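The registration step described above, phase correlation, finds the translation between two frames from the peak of the normalized cross-power spectrum. A minimal numpy sketch for integer shifts follows; the paper registers at subpixel precision, which this toy version omits:

```python
import numpy as np

def phase_correlation_shift(ref, img):
    """Estimate the integer translation that maps `ref` onto `img`
    from the peak of the normalized cross-power spectrum."""
    R = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    corr = np.real(np.fft.ifft2(R / (np.abs(R) + 1e-12)))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative values
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(2)
ref = rng.random((32, 32))
img = np.roll(ref, (3, -5), axis=(0, 1))     # known (3, -5) pixel shift
print(phase_correlation_shift(ref, img))     # (3, -5)
```

Once every frame's shift is known, shift-and-add places each frame at its estimated offset on a finer grid and averages, which is what yields the resolution gain without changing the FOV.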
- Conference Article
- 10.1109/igarss.2006.588
- Jul 1, 2006
Satellite observations use a very narrow field of view (FOV) (less than 0.01 deg), but field measurements generally use a very wide FOV (often between 10 deg and 40 deg) to obtain a representative sampling size. This difference may introduce large errors. The objective of this paper is to propose a practical modeling method and evaluate the effect of FOV on field reflectance measurements for row crops. The model treats a row crop as a repetition of rectangular walls and translates it to a grid image with sufficient spatial resolution. The wide-FOV reflectance is determined by averaging the reflectance of the elements. This model has been used to study a typical row-structured crop (a maize canopy) for different observation heights (from 1 m to 5 m), different observation angles (from -60 deg to +60 deg in steps of 5 deg), and three FOVs (10 deg, 25 deg, 45 deg). The results show that wide-FOV measurements generally underestimate reflectance in the red domain (up to -25% of relative reflectance with a 25 deg FOV) and overestimate it in the near-infrared domain (up to 10% of relative reflectance with a 25 deg FOV). For different viewing angles, the vegetation contribution is overestimated in the illumination direction and underestimated in the opposite direction. This study can be used not only to analyze FOV effects, but also to optimally design the observation geometry (FOV, measurement height, measurement spatial size, etc.) to minimize measurement errors, and/or to introduce corrections that reduce FOV effects.
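The modeling approach described, rasterizing the row structure to a grid and averaging within the sensor footprint, can be sketched for the simplest nadir-view case. All reflectance values, row geometry, and heights below are assumptions for illustration, not the paper's maize-canopy parameters:

```python
import numpy as np

# Toy version of the described model: the canopy is rasterized to a grid
# (vegetation rows vs. soil), and a wide-FOV nadir reading averages the
# reflectance inside the circular ground footprint.
grid = np.full((500, 500), 0.10)                 # soil reflectance, 1 cm grid
for row_start in range(0, 500, 75):              # 0.75 m row spacing
    grid[:, row_start:row_start + 25] = 0.45     # 0.25 m vegetation rows

def wide_fov_reflectance(grid, height_m, fov_deg, res_m=0.01):
    """Average reflectance inside the nadir-view ground footprint of
    radius height * tan(FOV / 2), expressed in grid pixels."""
    r_pix = height_m * np.tan(np.radians(fov_deg / 2)) / res_m
    n = grid.shape[0]
    y, x = np.ogrid[:n, :n]
    mask = (y - n // 2) ** 2 + (x - n // 2) ** 2 <= r_pix ** 2
    return grid[mask].mean()

# Widening the FOV changes the row/soil mix inside the footprint
print(wide_fov_reflectance(grid, 2.0, 10), wide_fov_reflectance(grid, 2.0, 45))
```

The full model additionally tilts the view and gives the rows height, which is what produces the direction-dependent over- and underestimation reported above.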