Printed Circuit Board (PCB) design reconstruction is essential for addressing part obsolescence, intellectual property recovery, compliance, quality assurance, and enhancing national capabilities. Traditional methods for PCB design extraction, both non-geometry-based and geometry-based, are limited in accuracy, efficiency, and scalability. This paper presents an automated approach that combines image processing and machine learning to perform 3D semantic segmentation of PCB X-ray Computed Tomography (X-ray CT) images and subsequent netlist extraction. By employing a 3D U-Net architecture with a ResNet-18 backbone trained on synthetic data, we introduce a first-of-its-kind method for direct 3D semantic segmentation that significantly improves on previous efforts. Because the synthetic training data are inherently labeled, our approach eliminates the need for extensive manually labeled datasets. The method also simplifies segmentation by greatly reducing or eliminating the preprocessing required for 2D image stacks, and it improves generality by segmenting the 3D image in its entirety rather than being restricted to images that satisfy specific 2D stack criteria. In addition, it can process images of PCBs that have undergone bending, a common condition for boards below a certain thickness. The implications of this approach extend beyond PCBs to the many physical and biological sciences in which 3D image segmentation is crucial. The methodology comprises high-resolution 3D imaging, watershed segmentation, machine learning-based semantic segmentation, and netlist extraction. Validation on both synthetic and real-world PCB datasets shows high accuracy and robustness, offering a scalable solution for PCB design reconstruction.
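The abstract names watershed segmentation as one stage of the pipeline. As a minimal illustration of how watershed can separate touching conductive structures in a 3D volume, the sketch below runs a marker-based 3D watershed with scipy and scikit-image on a synthetic toy volume. The toy volume, the threshold, and the marker-generation rule are all illustrative assumptions, not the paper's actual parameters or data.

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

# Hypothetical toy volume: two bright cuboids standing in for copper
# structures in an X-ray CT scan (real data would be a large grayscale
# 3D array reconstructed from the scanner).
vol = np.zeros((32, 32, 32))
vol[8:14, 8:14, 8:14] = 1.0
vol[20:28, 20:28, 20:28] = 1.0

# Binary foreground mask; simple thresholding here is a stand-in for
# the learned segmentation mask described in the paper.
mask = vol > 0.5

# Euclidean distance transform: each foreground voxel gets its distance
# to the nearest background voxel, forming one basin per structure.
distance = ndimage.distance_transform_edt(mask)

# Seed markers from the deep interior of each basin (distance > 2 is an
# arbitrary illustrative cutoff); ndimage.label assigns one integer
# label per connected seed region.
markers, num = ndimage.label(distance > 2)

# 3D watershed floods the negated distance map outward from the markers,
# splitting the masked volume into labeled regions.
labels = watershed(-distance, markers, mask=mask)
print(num, labels.max())  # number of seeds and number of final regions
```

In a real pipeline the labeled regions would then feed connectivity analysis for netlist extraction, since each watershed label corresponds to one candidate conductor.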