Abstract
This study proposes an image-enhancement detection technique based on Adltformer (adaptive dynamic learning transformer) team-training with Detr (detection transformer) to improve model accuracy under suboptimal conditions. Detecting cattle in real pastures is difficult under complex lighting, including backlighting, non-uniform illumination, and low light, which often causes loss of image detail and structural information, color distortion, and noise artifacts, degrading the visual quality of captured images and reducing model accuracy. To train the Adltformer enhancement model, a day-to-night image synthesis (DTN-Synthesis) algorithm generates low-light images that are precisely aligned with their normal-light counterparts and contain controlled noise levels. The Adltformer and Detr team-training (AT-Detr) method preprocesses the low-light cattle dataset for image enhancement, ensuring that the enhanced images better match the requirements of machine vision systems. Experimental results demonstrate that AT-Detr achieves superior detection accuracy, reaching 97.5% under challenging illumination conditions with runtime and model complexity comparable to Detr alone, and outperforms both Detr by itself and sequential image enhancement followed by Detr. The approach offers both theoretical justification and practical applicability for detecting cattle under challenging conditions in real-world farming environments.
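The abstract states that DTN-Synthesis produces low-light images aligned with their normal-light counterparts and containing controlled noise. The paper's actual synthesis pipeline is not given here; as a minimal sketch of the general idea, a gamma-based darkening curve plus Gaussian sensor noise (both illustrative assumptions, with hypothetical parameter names `gamma`, `gain`, and `noise_sigma`) yields a pixel-aligned low-light counterpart of a daytime image:

```python
import numpy as np

def dtn_synthesize(img, gamma=3.0, gain=0.4, noise_sigma=0.02, seed=None):
    """Darken a normal-light image and inject controlled noise.

    img: float array in [0, 1], shape (H, W, 3).
    The gamma curve and Gaussian noise model are illustrative
    stand-ins, not the paper's DTN-Synthesis specification.
    """
    rng = np.random.default_rng(seed)
    low = gain * np.power(img, gamma)                     # non-linear darkening
    low = low + rng.normal(0.0, noise_sigma, img.shape)   # controlled noise level
    return np.clip(low, 0.0, 1.0)

# Because the transform is applied per pixel, the synthetic low-light
# image shares the geometry (and thus the bounding-box annotations)
# of the original daytime image.
day = np.random.rand(480, 640, 3)
night = dtn_synthesize(day, seed=0)
```

Pixel alignment is what makes such pairs usable for supervised training of the enhancement model: the same annotations serve both images, and the enhancement target is exact.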