Abstract

Different configurations of the laser-induced deflection (LID) technique have recently been developed to measure the absolute bulk and coating absorption of laser components directly. To obtain the absolute absorptance of the surface or coating of a laser component, a reference sample of the same geometry and material as the test sample, with a resistive heater mounted on its surface, is employed to calibrate the LID signals. Because the test and reference samples are excited differently when their LID signals are measured (laser beam irradiation versus surface resistive heating), this calibration procedure may introduce significant errors into the determination of the absorptance of the test sample. In this paper, theoretical models are developed for the temperature rise distributions within a test sample excited by flat-top beam irradiation and within a reference sample excited by surface resistive heating. Based on these temperature models and the finite-element method, the LID signals used to determine the absorptance of the surface or coating of a laser component, and the corresponding calibration error, are analyzed. The computational results show that the calibration error depends strongly on the probe beam position for normal and transverse LID signals and may be minimized by optimizing the probe beam geometry.
