Abstract

Inverse lithography technology (ILT) is extensively used to compensate for image distortion in optical lithography systems by pre-warping the photomask at the pixel scale. However, computational complexity has always been a central challenge for ILT due to the large data volume involved. This paper proposes a dual-channel model-driven deep learning (DMDL) method that overcomes the computational burden while surpassing the image-fidelity limits of traditional ILT algorithms. The architecture of the DMDL network is not inherited from conventional deep learning but is derived from the inverse optimization model under a gradient-based ILT framework. A dual-channel structure is introduced to extend the capacity of the DMDL network, allowing it to simultaneously modify the mask contour and insert sub-resolution assist features, further improving the lithographic image fidelity. An unsupervised training strategy based on an auto-decoder is developed to avoid the time-consuming labelling process. The superiority of DMDL over the state-of-the-art ILT method is verified in terms of both computational efficiency and the image fidelity obtained on the semiconductor wafer.
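To make the gradient-based ILT framework referenced above concrete, the following is a minimal, hypothetical sketch of pixel-wise mask optimization: the continuous mask is kept in [0, 1] via a sigmoid parameterization, the optics are approximated by a Gaussian blur, resist development by a soft threshold, and the mask is updated by gradient descent on an L2 image-fidelity loss. The specific forward model, loss, and hyperparameters here are illustrative assumptions, not the paper's actual lithography model or the DMDL network.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ilt_optimize(target, sigma=2.0, steepness=8.0, thr=0.5, lr=2.0, steps=200):
    """Toy gradient-based ILT: optimize pixel parameters so the simulated
    printed image matches the target layout (all modeling choices are assumptions)."""
    # p: unconstrained pixel parameters; mask = sigmoid(p) stays in [0, 1]
    p = np.zeros_like(target, dtype=float)
    for _ in range(steps):
        mask = sigmoid(p)
        aerial = gaussian_filter(mask, sigma)          # Gaussian blur as a crude optics proxy
        image = sigmoid(steepness * (aerial - thr))    # soft resist thresholding
        # chain rule for the L2 loss  sum((image - target)^2)
        g_image = 2.0 * (image - target)
        g_aerial = g_image * steepness * image * (1.0 - image)
        g_mask = gaussian_filter(g_aerial, sigma)      # Gaussian kernel is self-adjoint
        g_p = g_mask * mask * (1.0 - mask)
        p -= lr * g_p
    # binarize the optimized mask; the pre-warped contour differs from the target
    return (sigmoid(p) > 0.5).astype(float), image

# tiny example: a square target pattern
target = np.zeros((32, 32))
target[10:22, 10:22] = 1.0
mask, printed = ilt_optimize(target)
```

In a real ILT flow the forward model would be a rigorous partially coherent imaging simulation rather than a single blur, but the structure (differentiable forward model, fidelity loss, pixel-wise gradient update) is what a model-driven network unrolls into its layers.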
