Abstract

Accurate visual measurement of micrometer-scale flying droplets in inkjet printing remains challenging because severe imaging conditions yield low-resolution images. Multi-frame super resolution (MFSR) has the potential to break through this measurement bottleneck. However, most existing MFSR methods make insufficient use of multi-frame information, especially in fast-motion scenes, and they often suffer from detail loss. In this study, focusing on multi-frame information utilization and deep feature extraction, we propose a dual pyramid multi-attention network (DPMAN). First, a dual pyramid deformable alignment (DPDA) module is proposed to handle diverse motion: it extracts explicit offsets to enhance deformable alignment and performs alignment in a coarse-to-fine manner. Then, a gated attention fusion (GAF) module is devised to adaptively aggregate the aligned features and emphasize favorable ones. Finally, a residual self-attention reconstruction (RSAR) module, built on a multi-stage aggregation self-attention architecture, is proposed to extract finer deep features for detail restoration. Experimental results on three benchmark datasets demonstrate that DPMAN achieves state-of-the-art performance. Applied to droplet image reconstruction, DPMAN improves measurement accuracy, reducing the relative measurement error from 3.34% to 2.52%.
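
The abstract outlines a three-stage pipeline: deformable alignment of neighboring frames to a reference, gated attention fusion of the aligned features, and self-attention-based reconstruction. The following is a minimal PyTorch sketch of that pipeline under strong simplifying assumptions; the dual pyramid levels, the multi-stage aggregation, and all module names and layer sizes (DeformAlign, GatedFusion, ResidualSelfAttention, 64 channels, 5 frames, x4 upscaling) are illustrative stand-ins, not the paper's actual DPDA/GAF/RSAR designs.

```python
# Illustrative sketch only: a simplified align -> fuse -> reconstruct pipeline
# in the spirit of the DPMAN abstract. All sizes and module internals are assumptions.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class DeformAlign(nn.Module):
    """Single-level deformable alignment: predict offsets from the concatenated
    neighbor/reference features, then warp the neighbor with a deformable conv."""
    def __init__(self, ch=64, k=3):
        super().__init__()
        self.offset_conv = nn.Conv2d(2 * ch, 2 * k * k, 3, padding=1)
        self.deform_conv = DeformConv2d(ch, ch, k, padding=k // 2)

    def forward(self, neighbor, reference):
        offset = self.offset_conv(torch.cat([neighbor, reference], dim=1))
        return self.deform_conv(neighbor, offset)


class GatedFusion(nn.Module):
    """Gated fusion: per-pixel gates weight each aligned frame before merging."""
    def __init__(self, ch=64, num_frames=5):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(num_frames * ch, num_frames, 3, padding=1), nn.Sigmoid())
        self.merge = nn.Conv2d(num_frames * ch, ch, 1)

    def forward(self, aligned):                      # aligned: (B, T, C, H, W)
        b, t, c, h, w = aligned.shape
        flat = aligned.view(b, t * c, h, w)
        gates = self.gate(flat).view(b, t, 1, h, w)  # one gate map per frame
        return self.merge((aligned * gates).view(b, t * c, h, w))


class ResidualSelfAttention(nn.Module):
    """Reconstruction: self-attention over spatial positions with a residual
    connection, followed by pixel-shuffle upsampling to the SR image."""
    def __init__(self, ch=64, scale=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(ch, num_heads=4, batch_first=True)
        self.up = nn.Sequential(
            nn.Conv2d(ch, ch * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
            nn.Conv2d(ch, 3, 3, padding=1))

    def forward(self, feat):                         # feat: (B, C, H, W)
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)     # (B, H*W, C)
        attn_out, _ = self.attn(tokens, tokens, tokens)
        feat = feat + attn_out.transpose(1, 2).reshape(b, c, h, w)
        return self.up(feat)


if __name__ == "__main__":
    B, T, C, H, W = 1, 5, 64, 32, 32
    frames = torch.randn(B, T, C, H, W)              # pre-extracted LR frame features
    ref = frames[:, T // 2]                          # center frame as reference
    align, fuse, recon = DeformAlign(C), GatedFusion(C, T), ResidualSelfAttention(C)
    aligned = torch.stack([align(frames[:, i], ref) for i in range(T)], dim=1)
    sr = recon(fuse(aligned))
    print(sr.shape)                                  # torch.Size([1, 3, 128, 128])
```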
