This study proposes an advanced algorithm for predicting the optimal orientation for 3D printing of a human manikin. The manikin mesh can be printed at any scale, depending on the user's needs. Once the printing scale is determined, the manikin data are dissected into parts that fit the 3D printer's maximum build volume, using our previously reported method. The newly proposed algorithm, designated "per-pixel signed-shadow casting," is then applied to each dissected manikin part to calculate the volumes of the object and its support structure. Our method classifies the original mesh triangles into three groups (alpha, beta, and top-covering) to eliminate the need for special hardware such as graphics cards. The result is a two-dimensional bitmap, designated a "tomograph," which represents the local distribution of the support structure both visually and quantitatively. Repeating this tomography over the three rotational axes yields a four-dimensional (4D) box-shaped graph, from which the optimal orientation of an arbitrary object is readily determined as the lowest-valued pixel. We applied the proposed method to several basic primitive shapes with differing degrees of symmetry and to complex shapes such as the well-known "Stanford Bunny." Finally, the algorithm was applied to human manikins at several printing scales, and the predicted values were compared with analytical volumes or with experimental volumes derived from g-code.
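The abstract does not include an implementation, so the following is a minimal sketch of one plausible reading of per-pixel signed-shadow casting, assuming a watertight mesh printed along +z with outward-facing (counter-clockwise) triangle normals. The function name `support_volume`, the default pixel size, and the exact per-group bookkeeping are illustrative assumptions, not the authors' code.

```python
import numpy as np

def support_volume(verts, faces, pixel=0.5):
    """Estimate the support volume of a watertight mesh printed along +z.

    Every triangle casts a signed shadow onto an xy pixel grid:
    down-facing triangles ("alpha") add their interpolated height,
    up-facing ones ("beta") subtract it, and the topmost up-facing
    surface per pixel ("top-covering") caps the column, so that
        support height per pixel = z_top + sum(alpha z) - sum(beta z).
    """
    verts = np.asarray(verts, dtype=float)
    lo = verts[:, :2].min(axis=0)
    hi = verts[:, :2].max(axis=0)
    nx, ny = np.ceil((hi - lo) / pixel).astype(int) + 1
    signed = np.zeros((nx, ny))           # running signed-shadow sum
    top = np.full((nx, ny), -np.inf)      # topmost up-facing z per pixel
    covered = np.zeros((nx, ny), dtype=bool)

    for tri in faces:
        a, b, c = verts[np.asarray(tri)]
        n = np.cross(b - a, c - a)        # outward normal (CCW winding)
        if abs(n[2]) < 1e-12:
            continue                      # vertical wall: casts no shadow
        sign = 1.0 if n[2] < 0 else -1.0  # alpha -> +, beta -> -
        e1, e2 = (b - a)[:2], (c - a)[:2]
        pts = np.array([a[:2], b[:2], c[:2]])
        i0, j0 = np.floor((pts.min(axis=0) - lo) / pixel).astype(int)
        i1, j1 = np.ceil((pts.max(axis=0) - lo) / pixel).astype(int)
        for i in range(max(i0, 0), min(i1 + 1, nx)):
            for j in range(max(j0, 0), min(j1 + 1, ny)):
                q = lo + np.array([i, j]) * pixel      # pixel centre
                u, v = np.linalg.solve(np.column_stack([e1, e2]),
                                       q - a[:2])      # barycentric coords
                if u < -1e-9 or v < -1e-9 or u + v > 1 + 1e-9:
                    continue              # sample falls outside triangle
                z = a[2] + u * (b - a)[2] + v * (c - a)[2]
                signed[i, j] += sign * z
                covered[i, j] = True
                if sign < 0:              # up-facing: candidate top cover
                    top[i, j] = max(top[i, j], z)

    ok = covered & np.isfinite(top)
    height = np.where(ok, top + signed, 0.0)
    return np.clip(height, 0.0, None).sum() * pixel * pixel
```

Sweeping the mesh through sampled rotations about the three axes and recording this value at each orientation would reproduce the 4D box-shaped graph described above, with the lowest-valued entry giving the predicted optimal orientation. Because the rasterization runs per pixel on the CPU, the sketch is consistent with the abstract's claim of requiring no special hardware such as graphics cards.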