Active learning based on Bayesian optimization (BO) is a popular black-box combinatorial search method, particularly effective for autonomous experimentation. However, existing BO methods do not account for the joint variation caused by process degradation over time together with input-dependent variation, a challenge that becomes more acute when the affordable number of experimental runs is very limited. State-of-the-art approaches do not address how to allocate limited experimental runs so that they jointly cover (1) representative inputs over a large search space for identifying the best combination, (2) replicates reflecting the true input-dependent testing variation, and (3) process variations that grow over time due to process degradation. This paper proposes Empirical Bayesian Hierarchical Variation Modeling in Bayesian Optimization (EHVBO), which uses process knowledge to maximize the exploration of candidate combinations in sequential experiments under a limited run budget. The method first mitigates the process-degradation effect through generalized linear modeling of grouped variations, guided by knowledge of the re-calibration cycle of process conditions. EHVBO then introduces an empirical Bayesian hierarchical model that leverages the common structure shared across different testing combinations to learn the input-dependent variation, thereby reducing the replicates needed for each input condition. Furthermore, EHVBO incorporates a heuristic strategy that improves search efficiency by selectively refining the search space over pivotal regions and excluding less promising ones. A case study on real experimental data demonstrates that the proposed method outperforms a variety of alternative optimization models.
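To illustrate the pooling idea behind the empirical Bayesian hierarchical step, the sketch below shrinks per-combination sample variances (estimated from very few replicates) toward a shared prior fitted across all combinations. This is a minimal, generic empirical-Bayes variance-shrinkage example, not the paper's actual model: the data, the replicate count, and the inverse-gamma prior with moment-matched hyperparameters are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical replicate data: a handful of test combinations, each measured
# with only n = 3 replicates (the paper's real experimental data are not shown).
true_sd = np.array([0.5, 1.0, 2.0, 0.8, 1.5])
n = 3
reps = [rng.normal(0.0, s, size=n) for s in true_sd]

# Per-combination sample variances -- very noisy with so few replicates.
s2 = np.array([np.var(r, ddof=1) for r in reps])

# Empirical Bayes: fit a shared inverse-gamma prior IG(a, b) to the sample
# variances by moment matching (mean = b/(a-1), variance = mean^2/(a-2)),
# so the prior itself is learned from the pooled data across combinations.
m, v = s2.mean(), s2.var(ddof=1)
a = m**2 / v + 2.0          # prior shape
b = m * (a - 1.0)           # prior scale

# Posterior mean of each sigma_i^2 given (n-1)*s2_i/sigma_i^2 ~ chi^2_{n-1}:
# a convex combination of the individual estimate s2_i and the pooled prior
# mean, so extreme variances from tiny samples are pulled toward the pool.
post = (b + 0.5 * (n - 1) * s2) / (a + 0.5 * (n - 1) - 1.0)
print("raw variances:     ", np.round(s2, 3))
print("shrunken variances:", np.round(post, 3))
```

Because each shrunken estimate borrows strength from all combinations, the hierarchical structure lets far fewer replicates per input condition suffice than estimating each variance in isolation would.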