Articles published on Field size
11270 Search results (sorted by recency)
- New
- Research Article
- 10.3390/s26031024
- Feb 4, 2026
- Sensors
- Lei Li + 2 more
To address the problem that hot-spot fault characteristics cannot be expressed effectively when detecting hot-spot faults in complex environments, owing to the low pixel proportion of the hot-spot target and to background interference, a photovoltaic module hot-spot fault detection method integrating U-Net and YOLOv8 is proposed. First, a U-Net segmentation network is introduced to remove pseudo-high-brightness heat sources in the background and to highlight the contour features of the photovoltaic panels, laying a foundation for the subsequent hot-spot fault detection task. Second, a detection network is built on the YOLOv8 framework. Because hot-spot features of photovoltaic panels of different sizes are difficult to extract, and inference speed is difficult to balance against detection accuracy, the network is designed around deformable convolution and GhostNet. To enhance the network's adaptability to multi-scale hot-spot targets, deformable convolution (DCN) is introduced into YOLOv8; by adaptively adjusting the shape and size of the receptive field, it further improves detection accuracy. To balance accuracy and speed, the C2f_Ghost module is designed to reduce the network parameters and improve inference speed. To verify the effectiveness of the algorithm, it is compared with SSD, YOLOv5, YOLOv7, and YOLOv8. The results show that the proposed algorithm detects hot-spot faults accurately, with an accuracy of up to 88.5%.
- New
- Research Article
- 10.1097/hp.0000000000002045
- Feb 2, 2026
- Health physics
- H Sekkat + 5 more
This study establishes a robust and clinically applicable calibration protocol for optically stimulated luminescence dosimeters (OSLDs) in diagnostic radiology, with the aim of improving the accuracy of patient dose assessment. A total of 144 OSLDs were systematically irradiated under controlled conditions to assess their dosimetric response across a wide range of tube voltages (40-150 kVp) and square field sizes (10 × 10 cm² to 30 × 30 cm²). The dosimeters exhibited a sensitivity variation of ±6.6%, with an average background dose of 0.0185 mGy. The experimental data revealed a strong dependence of OSLD response on photon energy, with dose values increasing by a factor of 11.5, from 0.1393 mGy at 40 kVp to 1.6072 mGy at 150 kVp for a constant field size of 10 × 10 cm². A pronounced non-linear dose escalation was observed in the mid-kVp range (70-100 kVp), where dose measurements increased by 72-90% as field size expanded. Energy- and geometry-specific correction factors were derived, showing significant variation with field size, reaching maximum values of 9.81 for the 30 × 30 cm² field at 150 kVp and 7.43 for the 10 × 10 cm² field under the same conditions. Additionally, notable discrepancies were observed between experimentally derived effective beam energies and reference values reported by the International Atomic Energy Agency (IAEA), highlighting the need for localized calibration standards. These findings contribute to the standardization of OSLD calibration protocols in diagnostic radiology and support their implementation for accurate patient dose monitoring in clinical settings.
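A correction factor of this kind is just the ratio of a known delivered dose to the mean background-corrected dosimeter reading. The sketch below illustrates that arithmetic in Python; the reference dose and individual readings are invented for illustration, and only the 0.0185 mGy average background is taken from the abstract.

```python
# Sketch: deriving an energy/field-size correction factor for an OSLD batch.
# All numeric inputs except the 0.0185 mGy background are illustrative.

def net_reading(raw_mGy, background_mGy=0.0185):
    """Subtract the average background dose reported for the batch."""
    return raw_mGy - background_mGy

def correction_factor(reference_dose_mGy, readings_mGy, background_mGy=0.0185):
    """Correction factor = known reference dose / mean net OSLD reading."""
    net = [net_reading(r, background_mGy) for r in readings_mGy]
    mean_net = sum(net) / len(net)
    return reference_dose_mGy / mean_net

# Hypothetical 150 kVp, 30 x 30 cm^2 setup: reference dose 1.0 mGy,
# three repeat dosimeter readings in mGy.
cf = correction_factor(1.0, [0.120, 0.118, 0.122])
```

The measured patient dose would then be the raw reading, background-subtracted, multiplied by the factor matching the beam quality and field size.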
- New
- Research Article
- 10.1080/10420150.2026.2620103
- Jan 29, 2026
- Radiation Effects and Defects in Solids
- Karim Bahhous + 5 more
This study aims to reproduce therapeutic dose distributions in small-field conditions for an Elekta Synergy system using Monte Carlo (MC) simulations. The Fluka and Geant4 codes were employed to simulate the Linac head. Output factors, percentage depth doses (PDDs), and off-axis dose profiles were computed for field sizes ranging from 2 × 2 cm² to 20 × 20 cm², including an asymmetric field size of 8 × 8 cm² (3 + 5 cm in one axis, 4 + 4 cm in the other), in water with a source surface distance of 90 cm. Analysis of the beam spectrum generated by the linac treatment head demonstrated strong agreement between the beam parameters for both MC codes. The calculated output factors at a depth of 10 cm in water agreed within 1.52% with measured values for both MC codes. Validation of the beam accuracy was conducted by comparing the MC calculated PDDs and dose profiles with measured data via gamma index analysis. Over 95% of the points for all simulations met the stringent acceptability criteria of 2%/2 mm.
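The gamma-index validation mentioned above combines a dose-difference criterion with a distance-to-agreement criterion. A minimal 1D version, assuming a global criterion normalised to the reference maximum and using toy depth-dose data (not the paper's), might look like:

```python
import math

def gamma_index_1d(ref, meas, positions, dose_crit=0.02, dist_crit=2.0):
    """Global 1D gamma: for each reference point, minimise the combined
    dose/distance metric over all measured points. Doses are normalised
    to the reference maximum. Returns per-point gamma values."""
    dmax = max(ref)
    gammas = []
    for x_r, d_r in zip(positions, ref):
        best = math.inf
        for x_m, d_m in zip(positions, meas):
            dose_term = ((d_m - d_r) / (dose_crit * dmax)) ** 2
            dist_term = ((x_m - x_r) / dist_crit) ** 2
            best = min(best, math.sqrt(dose_term + dist_term))
        gammas.append(best)
    return gammas

def passing_rate(gammas):
    """Percentage of points with gamma <= 1."""
    return 100.0 * sum(g <= 1.0 for g in gammas) / len(gammas)

# Toy depth-dose curves sampled every 1 mm (illustrative numbers only).
xs = [float(i) for i in range(10)]
ref = [100.0, 98.0, 96.0, 94.0, 92.0, 90.0, 88.0, 86.0, 84.0, 82.0]
meas = [100.5, 98.4, 96.3, 94.2, 92.1, 90.0, 87.9, 85.8, 83.7, 81.6]
rate = passing_rate(gamma_index_1d(ref, meas, xs))
```

A point passes when its gamma value is at most 1, i.e., some measured point lies within the combined 2%/2 mm ellipse around it.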
- New
- Research Article
- 10.1162/imag.a.1111
- Jan 29, 2026
- Imaging neuroscience (Cambridge, Mass.)
- Charlotte A Leferink + 2 more
Humans are experts at scene perception, for example, for recognizing familiar places or navigating new spaces. Scene representations in the brain are often thought of as global or large-scale, suggesting that scenes are represented by coarse-grained or holistic features. Recent work has shown that high spatial frequency or fine-grained visual information is also represented within scene-selective areas, which challenges the idea of strictly low-frequency, gist-like representations in those regions. Here, we explore whether these contrasting views can be explained by the size of the population receptive fields (pRFs) within two scene-selective areas: the parahippocampal place area (PPA) and occipital place area (OPA). Our results show that both the PPA and the OPA contain voxels with a variety of receptive field sizes, which follow a gradient from large to small along the anterior-posterior axis. This organization would predict scene representations in anterior PPA/OPA to be dominated by low spatial frequencies and scene representations in posterior PPA/OPA by high spatial frequencies. We find the opposite pattern when decoding scene categories of spatial frequency-filtered images. Our results indicate that preferred scene feature representations are transformed along the visual hierarchy, adhering closely to the expected correspondence between spatial frequency preferences and pRF size in early visual areas, but demonstrably less so in high-level visual brain regions.
- New
- Research Article
- 10.46586/tches.v2026.i1.161-184
- Jan 16, 2026
- IACR Transactions on Cryptographic Hardware and Embedded Systems
- Tommaso Pegolotti + 3 more
Universal hash functions are a widely used, fundamental building block in constructing more complex cryptographic schemes. This makes achieving high efficiency, at both the design and implementation level, an utmost priority. Using simple polynomial hash functions over prime fields is a popular choice; Poly1305 is a particular instance of such an approach that is standardized and widely deployed. However, even for simple polynomial hash functions, there are significant challenges in designing fast implementations. First, there is a large set of choices for algorithmic parameters such as the finite field and limb sizes. Second, the complexity and diversity of modern vector instruction set architectures (ISAs) make performance evaluation, and subsequent parameter selection, difficult. In this paper we present SPHGen, a program generator for simple polynomial hash functions. SPHGen takes the field parameters as input and outputs highly optimized code for a given vector ISA. The generated code is automatically verified by means of symbolic execution, ensuring functional correctness. Accompanying SPHGen is an accurate model that predicts the runtime of each generated program. Using SPHGen, one can readily identify the Pareto front of hash function parameters with respect to the security-performance trade-off; with the model, this can be done without running any code. SPHGen and the model can be retargeted to different vector ISAs and languages; we consider AVX2, AVX512, AVX512_IFMA, and Jasmin as examples. We generate Jasmin code to ensure memory safety and constant-time execution. We report benchmarks showing that SPHGen offers significant performance improvements over the best previous non-vectorized code. In addition, for large messages, our automatically generated code offers speedups of up to 37% compared to the highly optimized implementation of Poly1305 in OpenSSL, which is hand-coded in assembly.
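The "simple polynomial hash" family that SPHGen targets reduces to Horner evaluation of the message polynomial over a prime field. The sketch below shows the reference semantics for the Poly1305 prime; it is not SPHGen output, and it omits Poly1305's key clamping and final 128-bit reduction, so it illustrates the algebra rather than the full MAC.

```python
# Reference semantics of a simple polynomial hash over GF(p), p = 2^130 - 5:
# H(m) = (m_1*r^q + m_2*r^(q-1) + ... + m_q*r) mod p, via Horner's rule.

P = (1 << 130) - 5  # the prime field used by Poly1305

def poly_hash(blocks, r):
    """Horner evaluation of the message polynomial over GF(p)."""
    acc = 0
    for m in blocks:
        acc = (acc + m) * r % P
    return acc

def blocks_from_bytes(msg, block_len=16):
    """Split into 16-byte little-endian blocks with the usual high bit
    appended, as in Poly1305's message encoding."""
    out = []
    for i in range(0, len(msg), block_len):
        chunk = msg[i:i + block_len]
        out.append(int.from_bytes(chunk, "little") | (1 << (8 * len(chunk))))
    return out

# Illustrative key value r (a real Poly1305 r would be clamped).
r = 0x0123456789abcdef0123456789abcdef % P
digest = poly_hash(blocks_from_bytes(b"hello polynomial hashing"), r)
```

Fast implementations differ from this reference only in representation: the 130-bit accumulator is split into limbs so the multiply-reduce step maps onto vector multiply instructions, which is exactly the parameter space SPHGen explores.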
- New
- Research Article
- 10.1049/cit2.70097
- Jan 16, 2026
- CAAI Transactions on Intelligence Technology
- Hui Zong + 5 more
ABSTRACT With the advancement of satellite remote sensing technology, object detection based on high-resolution remote sensing imagery has emerged as a prominent research focus in the field of computer vision. Although numerous algorithms have been developed for remote sensing image object detection, they still suffer from challenges such as low detection accuracy and high false positive rates. To address these issues, we propose a novel architecture, the multiscale feature fusion network (MSFFNet). MSFFNet is composed of three key components: the Large Selective Kernel Block (LSKBlock), the Space-to-Depth ADown (SPDA) module, and the Double Feature Aggregation Neck (DFAN). Specifically, the LSKBlock adaptively captures salient target features by dynamically adjusting the receptive field size, thereby enhancing detection precision. The SPDA module converts spatial correlations into channel-wise dependencies by segmenting and reordering the feature maps, which helps preserve fine-grained information, suppress background interference, and reduce false detections. Furthermore, the DFAN integrates shallow and deep features through a multiscale feature fusion module (MSFFM), enabling the extraction of multiscale target representations and improving overall detection performance. Extensive experiments on the public datasets SIMD, VisDrone2019, and DIOR demonstrate the effectiveness of our approach. Compared with the YOLOv9s baseline model, MSFFNet achieves mAP50 improvements of 0.6%, 1.9%, and 3.5%, respectively.
- New
- Research Article
- 10.1167/jov.26.1.10
- Jan 15, 2026
- Journal of Vision
- Jacob Coorey + 2 more
Continuous flash suppression (CFS) is a form of interocular conflict in which one eye views a dynamic high-contrast mask that prolongs suppression of a target presented to the other eye. A variant of CFS known as tracking continuous flash suppression (tCFS) was developed, allowing the depth of interocular suppression to be measured. Although previous research has measured how the duration of suppression may be modulated by the contrast and size of the masking stimulus, no study has assessed how mask features affect suppression depth. In our first study, we manipulated mask contrast to measure the consequent impact on suppression depth as measured by the tCFS procedure. We observed that high mask contrast increased the threshold required for a target to break into awareness. Critically, the decrease in contrast required to re-suppress each target was proportionately the same across all conditions, so that suppression depth, the ratio of the two thresholds, remained constant. In the second experiment, we manipulated the size of the masking stimulus and found no change in breakthrough/suppression thresholds or suppression depth (i.e., the difference between the thresholds in log contrast). These findings clarify that, although changes in mask contrast may alter the threshold to enter awareness, there is no overall change in suppression depth, as changes in breakthrough threshold are mirrored by proportionately equivalent changes in suppression threshold. This result matches findings obtained with binocular rivalry showing that suppression depth is constant despite changes in stimulus contrast. Differing levels of mask contrast and size can therefore be used in CFS research without altering the strength of suppression, consistent with the view that interocular suppression operates in small local spatial zones determined by receptive field size in the primary visual cortex.
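Suppression depth as used here is the gap between the breakthrough and re-suppression thresholds on a log-contrast scale, so a constant threshold ratio means constant depth. A minimal numerical illustration (the contrast values are invented, not the study's data):

```python
import math

def suppression_depth_db(breakthrough_contrast, resuppression_contrast):
    """Suppression depth as the log-contrast (dB) gap between the contrast
    at which a target breaks into awareness and the contrast at which it
    is re-suppressed. A constant threshold ratio gives a constant depth."""
    return 20 * math.log10(breakthrough_contrast / resuppression_contrast)

# Illustrative: if raising mask contrast scales both thresholds by the
# same factor, suppression depth is unchanged.
low_mask = suppression_depth_db(0.10, 0.05)
high_mask = suppression_depth_db(0.20, 0.10)
```

Here both conditions yield the same depth (a 2:1 threshold ratio, about 6 dB), mirroring the paper's finding that breakthrough and suppression thresholds shift together.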
- New
- Research Article
- 10.1002/acm2.70331
- Jan 14, 2026
- Journal of Applied Clinical Medical Physics
- Ioannis A Tsalafoutas + 2 more
PURPOSE: The exposure index (EI), the target exposure index (EIT), and the deviation index (DI) are defined in IEC Standard 62494-1 Ed. 1 2008-08. This study investigates the impact of certain acquisition parameters, the imaged anatomy, and manufacturer specificities on the EI of radiological images, and how these may affect the EIT setting procedure.
METHODS: Images were acquired using two digital radiography (DR) systems from two different manufacturers, using aluminum attenuators and an anthropomorphic phantom. Acquisition parameters such as tube potential (kVp), tube loading (mAs), exposure time, automatic exposure control (AEC) settings (sensor and dose-level selection), grid (with or without), additional filtration, field size, and imaged anatomy were varied, and their effect on the EI was quantified separately for each system.
RESULTS: EI is linearly related to the incident air kerma (IAK) on the detector, as expected by definition. For constant IAK, EI increases with increasing kVp. Although EI is generally reduced in the presence of scatter, this is not always the case. Under AEC operation, even the exposure time can make a difference. EI is strongly affected by the imaged anatomy in combination with the AEC sensor and field size selections, the examination protocol, and the manufacturer.
CONCLUSIONS: Many parameters besides IAK affect the EI calculation; the most important are the imaged anatomy and the manufacturer. Since the EI calculation is a complex procedure, EIT values should be set with caution on a per-examination and per-manufacturer basis, since values that apply to one digital system are not always applicable to another. Furthermore, when EI is used as an image quality tool, a DI variation of at least ±2 should be allowed before a possibly meaningful red flag is raised.
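The deviation index relates EI to EIT on a logarithmic scale, DI = 10 · log10(EI/EIT), so the suggested ±2 tolerance corresponds to roughly +58% / -37% in detector exposure. A minimal sketch (the EI values are illustrative):

```python
import math

def deviation_index(ei, ei_target):
    """DI as defined in IEC 62494-1: DI = 10 * log10(EI / EI_T)."""
    return 10 * math.log10(ei / ei_target)

# Illustrative EI values: an image about 59% above its target exposure
# lands right around DI = +2, the tolerance suggested in the study.
di_high = deviation_index(500, 315)
di_on_target = deviation_index(400, 400)
```

Each whole DI step is a factor of 10^0.1 (about 26%) in exposure, which is why a ±2 band is a fairly generous but, per the study, appropriate flagging threshold.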
- Research Article
- 10.1002/acm2.70460
- Jan 7, 2026
- Journal of Applied Clinical Medical Physics
- Carlos Ferrer + 4 more
Background: Plastic scintillation detectors (PSDs) are widely used for detecting and measuring ionizing radiation. These detectors are versatile, with high efficiency, fast response, and the ability to provide real-time measurements.
Purpose: To evaluate the suitability of the Blue Physics PSD (BP-PSD) for performing ultra-fast dosimetric commissioning measurements with high accuracy and precision in a very short time.
Methods: Ultra-fast measurements were performed in water using a BP-PSD on an Elekta Unity MR-linac. Percentage depth doses (PDDs) and profiles at different depths were measured at two movement velocities, 10 mm/s and 20 mm/s, for field sizes ranging from 10 × 10 cm² to 1 × 1 cm². Gamma analysis was conducted to compare these measurements with those obtained during machine commissioning using a PTW Semiflex 3D ionization chamber (for PDDs) and a PTW microDiamond detector (for PDDs and profiles). Gamma criteria of 2%/2 mm and 1%/1 mm dose difference/distance to agreement were studied, alongside field size, penumbra, and measurement time.
Results: All PDD and profile gamma passing rates were 100% at 2%/2 mm. At the stricter 1%/1 mm criterion, all PDDs showed a passing rate above 96.97% for both velocities, with most profiles exceeding 95% at 10 mm/s and 90% at 20 mm/s. Gamma analysis results were superior for smaller fields (1 × 1 cm² and 2 × 2 cm²) and generally better at 10 mm/s. On average, the penumbra measurements obtained with the PSD were greater than those achieved with the microDiamond detector. Measurement times were between 7 and 14 times shorter for PDDs, and between 5 and 9 times shorter for profiles, at speeds of 10 mm/s and 20 mm/s, respectively.
Conclusions: Ultra-fast measurements using the Blue Physics PSD are suitable for acquiring dosimetric commissioning data with high accuracy and precision, and can be performed in a much shorter timeframe than with commonly used detectors.
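Penumbra, one of the quantities compared above, is conventionally the lateral distance between the 20% and 80% dose levels on a profile edge. A small interpolation sketch with invented profile samples (not the study's measurements):

```python
def penumbra_width(positions_mm, doses, lo=0.2, hi=0.8):
    """20%-80% penumbra width on one edge of a profile, using linear
    interpolation between sample points. Doses need not be normalised;
    levels are taken relative to the profile maximum."""
    dmax = max(doses)

    def crossing(level):
        target = level * dmax
        for i in range(len(doses) - 1):
            d0, d1 = doses[i], doses[i + 1]
            if (d0 - target) * (d1 - target) <= 0 and d0 != d1:
                t = (target - d0) / (d1 - d0)
                return positions_mm[i] + t * (positions_mm[i + 1] - positions_mm[i])
        raise ValueError("level not crossed")

    return abs(crossing(hi) - crossing(lo))

# Toy rising field edge sampled every 1 mm (illustrative only).
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ds = [0.0, 10.0, 30.0, 70.0, 90.0, 100.0]
width = penumbra_width(xs, ds)
```

A broader penumbra from one detector relative to another, as reported here for the PSD versus the microDiamond, shows up directly as a larger 20%-80% distance on the same edge.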
- Research Article
- 10.1002/ldr.70432
- Jan 4, 2026
- Land Degradation & Development
- Yu Zhang + 6 more
ABSTRACT Differences in farmland scale lead to variations in agricultural practices and management, which in turn influence the direction and rate of change in soil organic matter (SOM). This study collected 574 topsoil samples (0–20 cm) from the Youyi Farm in a typical black soil region of Northeast China. Cloud-free Landsat images from 1984 to 2023 were obtained via Google Earth Engine, and bare-soil images were synthesized in 10-year intervals. The study area was classified using the K-means clustering algorithm to construct a two-cluster probabilistic hybrid model, enhancing the accuracy of SOM predictions. Finally, SOM spatial distribution data were obtained for each 10-year period to evaluate the impact of different farmland scales on SOM variation. The results showed that: (1) the probabilistic hybrid model effectively improved SOM prediction performance, with R² reaching 0.71, RMSE at 0.76%, and RPD at 1.96. (2) Over the past 40 years, SOM content at Youyi Farm has shown an overall downward trend, with the average SOM content decreasing from 3.57% ± 0.65% to 3.51% ± 0.58%; negative changes in SOM were observed in 67.15% of the farmland. (3) SOM decreased most slowly when field sizes ranged from 180 to 210 ha, as both excessively large and excessively small farmland scales accelerated SOM decline. Future conservation of black soil and intensive agricultural land use should consider rational planning of farmland scale.
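The R², RMSE, and RPD figures quoted in result (1) are standard goodness-of-fit metrics for soil-property prediction; RPD is the standard deviation of the observations divided by the RMSE. A small self-contained sketch with invented SOM data (% content):

```python
import math

def regression_metrics(observed, predicted):
    """R^2, RMSE, and RPD (SD of observed / RMSE), as commonly used to
    judge soil-property prediction models."""
    n = len(observed)
    mean_obs = sum(observed) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    rmse = math.sqrt(ss_res / n)
    sd = math.sqrt(ss_tot / n)
    return 1 - ss_res / ss_tot, rmse, sd / rmse

# Invented observed vs. predicted SOM values (% content).
obs = [3.2, 3.5, 3.8, 4.1, 4.4]
pred = [3.3, 3.4, 3.9, 4.0, 4.5]
r2, rmse, rpd = regression_metrics(obs, pred)
```

An RPD above roughly 2 is usually read as a reliable model and 1.4-2 as fair, which puts the paper's 1.96 near the reliable boundary.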
- Research Article
- 10.31557/apjcp.2026.27.1.123
- Jan 1, 2026
- Asian Pacific journal of cancer prevention : APJCP
- Elsayed M Alashkar + 4 more
After radioactivity and X-rays were first discovered, researchers found that radiation could harm cells by damaging their internal structures, with cancer cells particularly vulnerable to these effects. Today, various advanced machines and methods are employed to enhance the precision of radiation delivery to tumors. Monte Carlo simulation is an excellent method for predicting dose distributions under given conditions. In this study, we therefore investigated the effect of field size on central-axis dose distributions (PDD curves) and on lateral dose profiles. Three MC codes, (i) MCBEAM, (ii) MCSIM, and (iii) MCSHOW, were used to simulate 6 MV photon beams with five field sizes (5 × 5 cm, 8 × 8 cm, 10 × 10 cm, 15 × 15 cm, and 20 × 20 cm). PDD and profile curves were compared for each field size. Smaller field sizes (e.g., 5 × 5 cm) exhibited a lower surface dose than larger fields, and the depth of maximum dose (dmax) shifts slightly deeper as field size increases because of increased scatter contributions. Larger fields (15 × 15 cm, 20 × 20 cm) demonstrated a slower dose falloff at depth than smaller fields. The Monte Carlo calculations confirm that field size significantly affects PDD curves and surface dose. This agreement encourages further work with these codes to improve treatment techniques in radiotherapy.
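A PDD curve is simply the central-axis dose normalized to 100% at the depth of maximum dose. A minimal sketch with invented 6 MV-like numbers (not the paper's simulation output):

```python
def percentage_depth_dose(depths_cm, doses):
    """Normalise a central-axis dose curve to 100% at d_max.
    Returns (depth of maximum dose, PDD values in %)."""
    dmax_dose = max(doses)
    dmax_depth = depths_cm[doses.index(dmax_dose)]
    pdd = [100.0 * d / dmax_dose for d in doses]
    return dmax_depth, pdd

# Toy 6 MV-like central-axis curve (illustrative numbers only).
depths = [0.0, 0.5, 1.0, 1.5, 2.0, 5.0, 10.0]
doses = [55.0, 85.0, 98.0, 100.0, 99.0, 86.0, 67.0]
dmax, pdd = percentage_depth_dose(depths, doses)
```

The field-size effects described above would show up in this representation as a higher surface value (pdd[0]), a slightly deeper dmax, and a slower falloff beyond dmax for larger fields.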
- Research Article
- 10.1016/j.biortech.2025.133288
- Jan 1, 2026
- Bioresource technology
- Muhammad Umer Arshad + 6 more
Optimizing bioenergy biofuel harvest: a comparative analysis of stepwise and integrated methods for economic and environmental sustainability.
- Research Article
- 10.1155/er/4243861
- Jan 1, 2026
- International Journal of Energy Research
- Saedaseul Moon + 1 more
The discovery of natural hydrogen—often called gold or white hydrogen—has attracted attention as a potential game‐changer in the hydrogen economy. This study develops a hypothetical economic assessment model for natural hydrogen exploration and production (E&P) projects, built on proxy cost data from oil and natural gas projects due to the absence of commercial‐scale cases. The model assumes all produced hydrogen is converted into electricity for sale via existing grids, incorporates renewable energy certificates (RECs) to reflect environmental value, and applies historical average electricity and REC prices. Using a comparative analogy approach, the analysis evaluates project viability under various scenarios, including changes in power generation efficiency, electricity and REC prices, costs, and field size. Results indicate that REC subsidies are essential for viability at median field size, though projects remain feasible with reduced subsidies. Revenue‐related factors—particularly generation efficiency and electricity prices—have greater impact than cost factors. Economies of scale substantially improve feasibility, enabling large fields to operate profitably without subsidies. Compared with other hydrogen production methods, natural hydrogen offers the lowest estimated cost and relatively low greenhouse gas (GHG) emissions. These findings suggest that policy support, such as differentiated REC incentives based on well size, can encourage investment, while technological improvements in generation efficiency should be prioritized over upstream cost reductions. However, the model is based on simplifying assumptions and may not capture cost variations from unique geophysical characteristics of natural hydrogen reservoirs or regional market constraints. As empirical data from real projects become available, the model’s assumptions should be refined to improve accuracy and applicability.
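An assessment model of this kind typically discounts yearly cash flows (electricity plus REC revenue minus operating costs) against the upfront capital. The sketch below is a generic NPV skeleton with invented figures, not the paper's model or its proxy cost data; every parameter name and value is a placeholder.

```python
def npv(cash_flows, discount_rate):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

def project_cash_flows(capex, annual_h2_kg, kwh_per_kg, elec_price, rec_price,
                       opex, years):
    """Illustrative E&P cash-flow schedule: electricity revenue plus REC
    revenue per kWh generated, minus annual operating cost."""
    kwh = annual_h2_kg * kwh_per_kg
    annual = kwh * (elec_price + rec_price) - opex
    return [-capex] + [annual] * years

# All figures invented: a small field whose output, at these prices,
# does not recover the upfront capital within 10 years.
flows = project_cash_flows(capex=5e6, annual_h2_kg=1e5, kwh_per_kg=20.0,
                           elec_price=0.10, rec_price=0.05, opex=1e5, years=10)
value = npv(flows, 0.08)
```

With these placeholder numbers the NPV is negative, illustrating the paper's point that small fields need subsidy support while economies of scale (larger annual output against a less-than-proportional capex increase) can push the NPV positive without RECs.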
- Research Article
- 10.1063/5.0300770
- Jan 1, 2026
- Physics of Fluids
- Lixin Shen + 5 more
To investigate the influence of initial instability waves on the secondary atomization of a rotating conical liquid sheet, this study adopted a numerical simulation method. By varying the initial velocity of the liquid sheet to simulate the initial instability waves, the effects of sinusoidal fluctuating velocities under different frequencies and amplitudes on secondary atomization were investigated. The research demonstrates that the characteristic parameters of secondary atomization undergo regular changes with increases in the frequency and amplitude of the initial velocity fluctuations of the liquid sheet. Furthermore, the direction of these changes (increase or decrease) periodically reverses depending on the axial position or the moment of observation. The initial velocity fluctuations of the liquid sheet form ring-shaped droplet-dense regions along the axis of the spray field. Variations in frequency and amplitude lead to changes in the droplet number distribution within each droplet-dense region, as well as alterations in the droplet distribution on the inner and outer sides of the spray cone in the radial direction. In addition, the influence of the velocity fluctuation frequency on the secondary atomization characteristics and the spatial distribution characteristics of the droplets is significantly greater than that of the amplitude. This study provides insights for regulating the droplet size and spatial distribution characteristics of the spray field, contributing to reliable prediction and control methods for the design of high-performance advanced combustion chamber nozzles.
- Research Article
- 10.1016/s1003-6326(25)66958-5
- Jan 1, 2026
- Transactions of Nonferrous Metals Society of China
- Ling-Hui Meng + 5 more
Numerical model for rapid prediction of temperature field, mushy zone and grain size in heating-cooling combined mold (HCCM) horizontal continuous casting of C70250 alloy plates
- Research Article
- 10.15294/jpehs.v12i2.39824
- Dec 31, 2025
- Journal of Physical Education Health and Sport
- Syahrul Ghazi Parasutama + 2 more
This study analyzes the effect of small-sided games training with different field sizes on the dribbling skills of 15-year-old soccer athletes. The study used a quantitative approach with a pretest-posttest control-group experimental design. The subjects were twenty 15-year-old soccer athletes, randomly divided into equal experimental and control groups. The experimental group was given small-sided games training with varying field sizes, while the control group underwent small-sided games training without varying field sizes. Dribbling skill was measured using a zig-zag dribbling test at the pretest and posttest stages. Data were analyzed using descriptive statistics, prerequisite tests, and independent t-tests. The results showed that small-sided games training with different field sizes produced more effective dribbling-skill improvements than training without varying field sizes. Variations in field size created more diverse technical and situational demands, encouraging better adaptation of ball control, agility, and player decision-making. These findings confirm that field size adjustment is an important factor in the application of small-sided games in the development of adolescent soccer athletes.
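The independent t-test used to compare the two groups can be computed directly from the group means and pooled variance. A sketch with hypothetical score gains (the numbers are invented, not the study's data):

```python
import math

def independent_t(group_a, group_b):
    """Student's two-sample t statistic with pooled variance,
    appropriate for two independent groups of equal size."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical pretest-to-posttest dribbling improvements (seconds).
experimental = [2.1, 2.4, 1.9, 2.6, 2.2]
control = [1.2, 1.5, 1.1, 1.4, 1.3]
t_stat = independent_t(experimental, control)
```

The resulting t statistic is compared against the critical value for na + nb - 2 degrees of freedom to decide whether the difference between groups is significant.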
- Research Article
- 10.1177/1088467x251401374
- Dec 22, 2025
- Intelligent Data Analysis: An International Journal
- Junyao Kuang + 1 more
Remote sensing Change Detection (CD) has advanced significantly with the adoption of Convolutional Neural Networks (CNNs) and Transformers. While CNNs provide robust feature extraction capabilities, they are limited by their receptive field size, and Transformers are constrained by quadratic computational complexity when handling long sequences, impacting scalability. The Mamba architecture offers a compelling alternative with its linear complexity and high parallelism; however, its intrinsic 1D processing structure results in a loss of spatial information in 2D vision tasks. This paper proposes an efficient framework employing a Vision Mamba variant that enhances the ability to capture 2D spatial information while maintaining Mamba’s hallmark linear complexity. The framework utilizes a 2DMamba encoder to effectively learn global spatial contextual information from multi-temporal images. For feature fusion, we introduce a 2D scan-based, channel-parallel scanning strategy coupled with a spatio-temporal feature fusion method. This approach adeptly captures both local and global change information, addressing spatial discontinuity issues during fusion. In the decoding phase, we present a feature change flow-based decoding method that enhances the mapping of feature change information from low-resolution to high-resolution feature maps, thus mitigating feature shift and misalignment. Extensive experiments on benchmark datasets such as LEVIR-CD+ and WHU-CD demonstrate the competitive performance of our framework compared to state-of-the-art methods, highlighting the significant potential of Vision Mamba for efficient and accurate remote sensing change detection.
- Research Article
- 10.1002/mp.70227
- Dec 21, 2025
- Medical physics
- Masashi Yamanaka + 7 more
Fluoroscopic-gated proton therapy (FGPT) enables precise dose delivery to tumors affected by respiratory motion by tracking internal fiducial markers and delivering the proton beam when the marker is within a gating window. However, scatter radiation from fluoroscopic x-rays may be detected by the dose monitor (DM) and mistakenly counted as proton monitor units (MU). To mitigate this issue, proton beam delivery is typically interrupted during the fluoroscopy pulse, a method known as interrupted continuous delivery (ICD). If the contamination from scattered fluoroscopic x-rays is sufficiently low compared to the proton beam current, uninterrupted continuous delivery (UCD), in which fluoroscopic x-rays are delivered concurrently with proton beams, may be feasible. UCD can enhance beam stability and improve treatment efficiency, in particular for synchrotron-based proton therapy systems using continuous beams. This study aimed to measure the contamination of scattered fluoroscopic x-rays in the DM and to evaluate the dose distribution when fluoroscopic x-rays and proton beams are delivered simultaneously in FGPT. X-ray contamination was measured using a proton therapy system equipped with the FGPT system. Fluoroscopic x-rays were delivered for 30 s under both in-air and solid-phantom conditions, with phantom thicknesses of 10 and 30 cm, and DM counts were recorded. The dose rate was evaluated under various fluoroscopic conditions, both with and without the range shifter and mini ridge filter. To assess the dosimetric impact, six virtual treatment plans were created using anthropomorphic phantoms for cases involving the prostate, lung (both superficial and deep), liver (both superficial and deep), and pancreas. Scattered fluoroscopic x-ray contamination was measured at the beam angles specified in the treatment plans under clinical setup conditions.
Dose distributions were recalculated assuming simultaneous delivery with proton beam currents of 0.1, 1, 2, 4, 8, and 40 MU/s, and these were compared to the treatment plans. The scattered fluoroscopic x-ray contamination in the DM increased with higher tube voltage, tube current, frame rate, and field size. The maximum dose rate was 0.247 MU/s with a 125 kV tube voltage, 80 mA tube current, 30 s⁻¹ frame rate, 19 × 19 cm² field size, and a 10 cm solid phantom. The dose rate of scattered fluoroscopic x-rays peaked at a solid-phantom thickness of approximately 5 cm. In an anthropomorphic phantom, scattered fluoroscopic x-ray contamination varies with anatomical site and beam angle. Dose evaluations indicated that if the proton beam current was ≥2 MU/s at the target and ≥1 MU/s at organs at risk, the differences in dose metrics were within 1% compared to the treatment plans. These beam currents are achievable with clinical systems. Scattered fluoroscopic x-rays were confirmed to contaminate the DM and be counted as proton MUs. However, under clinically realistic beam current conditions, their impact on dose distribution in UCD FGPT was negligible. These findings support the feasibility of implementing UCD FGPT.
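One way to see why higher proton beam currents dilute the x-ray contamination is to compute the fraction of counted MU that the worst-case 0.247 MU/s scatter rate would represent at each candidate current. This is a back-of-the-envelope sketch, not the paper's dose recalculation; the per-site contamination rates it measured are lower than this worst case.

```python
def contamination_fraction(xray_mu_per_s, proton_mu_per_s):
    """Fraction of counted MU attributable to scattered fluoroscopic
    x-rays when both beams are delivered simultaneously (UCD)."""
    return xray_mu_per_s / (xray_mu_per_s + proton_mu_per_s)

# Worst-case scatter rate reported in the study (MU/s) vs. the
# candidate proton beam currents it evaluated.
worst_case = 0.247
fractions = {i: contamination_fraction(worst_case, i)
             for i in (0.1, 1, 2, 4, 8, 40)}
```

At 0.1 MU/s the scatter would dominate the monitor counts, while at 40 MU/s it falls well below 1%, consistent with the conclusion that clinically realistic beam currents make the contamination negligible.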
- Research Article
- 10.53862/jupeten.v5i2.006
- Dec 19, 2025
- Jurnal Pengawasan Tenaga Nuklir
- Dita Kusumaningrum + 3 more
Lumbar vertebrae examinations for low back pain (LBP) cases are often performed with various collimation field settings, so it is important to evaluate the effect of different collimation areas on dose and image quality. Phantom images were acquired with varying collimation field sizes: 42 cm × 36 cm, 42 cm × 30 cm, and 42 cm × 24 cm. The radiation dose was recorded and analyzed, as was the quality of the phantom images, which was assessed via signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) using ImageJ software. The image analysis showed the highest SNR and CNR at a collimation area of 42 cm × 24 cm, with a decrease in the dose area product (DAP) at smaller collimation areas. This collimation area was then used for the optimization process during AP and lateral projection lumbar vertebrae examinations. The study used data from 30 patients before and after optimization, with the criterion of adult patients aged 15 years or older. An image audit was carried out by recording the collimation area and the measured dose on the resulting image. The local diagnostic reference level (DRL) values for AP projections before and after optimization were 96.11 µGy·m² and 74.40 µGy·m², respectively; for lateral projections, they were 187.35 µGy·m² and 172.99 µGy·m². Use of the optimized collimation area reduced the local DRL for lumbar vertebrae examination by 22.58% for AP projections and 7.66% for lateral projections. Limiting the collimation area in lumbar vertebrae examinations can reduce the radiation dose received by patients, making it a practical optimization step. Keywords: collimation field, dose optimization, diagnostic reference level (DRL), lumbar vertebrae
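SNR and CNR of the kind measured here in ImageJ reduce to simple region-of-interest statistics. A sketch with invented pixel samples (not the study's phantom data):

```python
import statistics

def snr(signal_roi):
    """Signal-to-noise ratio: mean / standard deviation of a signal ROI."""
    return statistics.mean(signal_roi) / statistics.stdev(signal_roi)

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: |difference of ROI means| divided by the
    background noise (one common convention among several)."""
    return (abs(statistics.mean(signal_roi) - statistics.mean(background_roi))
            / statistics.stdev(background_roi))

# Toy pixel-value samples from a signal ROI and a background ROI.
sig = [210, 208, 212, 209, 211]
bkg = [100, 102, 98, 101, 99]
snr_val = snr(sig)
cnr_val = cnr(sig, bkg)
```

Tighter collimation reduces scattered radiation reaching the detector, lowering the noise term in both ratios, which is the mechanism behind the higher SNR and CNR observed at 42 cm × 24 cm.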
- Research Article
- 10.1002/acm2.70371
- Dec 18, 2025
- Journal of Applied Clinical Medical Physics
- Morgan Healy + 2 more
Background: Stereotactic ablative radiotherapy (SABR) is a technique developed for delivery of high doses of radiation to target volumes. Standard delivery methods for SABR include dynamic conformal arc therapy (DCAT) and volumetric modulated arc therapy (VMAT), as they allow for meaningful gains in delivery speed and, in some instances, sparing of normal tissues compared to conventional 3D planning. However, these techniques require complex treatment planning system (TPS) algorithms, as well as sophisticated irradiation methods. As a result, verification of the planned dose distribution prior to treatment is still standard procedure in most clinical settings. Because of the complex nature of SABR VMAT fields, a full 3D dose matrix is advantageous for plan verification. This 3D dose matrix can be obtained with the OCTAVIUS 4D system, associated software, and the 1000SRS array.
Purpose: The aim of this study was to compare the dosimetric characteristics of the OCTAVIUS 1000SRS array pre- and post-upgrade and to evaluate any improvements in the array's performance post-upgrade.
Methods: The array's performance was tested post-upgrade, and results were compared to pre-upgrade measurements acquired five years prior at commissioning. This study evaluated the calibration of the central chamber, relative calibration of peripheral chambers, water-equivalent depth of the effective point of measurement (EPOM), signal leakage, dose rate linearity, output factors for field sizes ranging from 1.0 × 1.0 to 10.0 × 10.0 cm², and gamma analysis passing rates for ten lung and liver SABR plans.
Results: The EPOM and absorbed dose calibration of the array under reference conditions remain unchanged, but signal leakage for all chambers and relative calibration of off-axis chambers have improved after the repair. The workflow for VMAT deliveries initially involved multiple calibrations, with the appropriate calibration file selected for each measurement based on the average dose rate of the plan. The array exhibits such improved dose rate linearity post-upgrade that a single calibration file at 1000 MU/min is now sufficient, as the array has a response variation of < ±0.4% across the range of dose rates expected in clinical plans (700–1300 MU/min). As the dose rate in the measured plane decreases with decreasing field size, other studies have suggested using field-size-dependent output factor corrections, though our output factor measurements showed agreement within 0.95% with the Monaco treatment planning system for field sizes ≥ 1.5 × 1.5 cm², and we find this is not required. There was also an increase of 9.2% in the gamma analysis passing rates for clinical deliveries using a 2%/1 mm criterion (3D gamma, global dose, and 10% threshold).
Conclusions: The replaced cable and upgraded front foil and detector field have had a positive impact on the dosimetric performance of the 1000SRS array. The unaffected array housing and EPOM mean no change is required for positioning and measurement set-up. The absorbed dose determination under reference conditions has not been impacted, but improved dose rate linearity, relative calibration of peripheral chambers, and signal leakage mean that measurements under non-reference conditions, such as a VMAT delivery, are more accurate than before.