Modern machine learning has the potential to fundamentally change the way bioprocesses are developed. In particular, horizontal knowledge transfer methods, which seek to exploit data from historical processes to facilitate process development for a new product, provide an opportunity to rethink current workflows. In this work, we first assess the potential of two knowledge transfer approaches, meta-learning and one-hot encoding, in combination with Gaussian process (GP) models. We compare their performance with that of GPs trained only on data of the new process, that is, local models. Using simulated mammalian cell culture data, we observe that both knowledge transfer approaches roughly halve the test set error relative to the local models when two, four, or eight experiments of the new product are used for training. Subsequently, we address the question of whether experiments for a new product could be designed more effectively by exploiting existing knowledge. In particular, we suggest designing a small number of runs for the novel product specifically to calibrate knowledge transfer models, a task we term calibration design. We propose a customized objective function for identifying a set of calibration design runs, which exploits differences in the process evolution of historical products. In two simulated case studies, we observe that training with calibration designs yields test set errors similar to those of common design of experiments approaches while requiring approximately four times fewer experiments. Overall, the results suggest that process development could be significantly streamlined by systematically carrying knowledge over from one product to the next.
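
The abstract does not spell out how one-hot encoding enables knowledge transfer with a GP, so the following is a minimal sketch, assuming scikit-learn's GaussianProcessRegressor and a synthetic toy data set; the helper names (one_hot, titer), the input dimensionality, and the number of products are all hypothetical, not taken from the paper. The core idea illustrated is that each run is augmented with a one-hot product indicator, so a single GP can pool historical runs with the few runs of the new product.

```python
# Minimal sketch (not the authors' implementation) of one-hot-encoded
# knowledge transfer with a Gaussian process, using scikit-learn.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

def one_hot(product_ids, n_products):
    """Map integer product ids to one-hot indicator rows."""
    return np.eye(n_products)[product_ids]

# Hypothetical toy data: process conditions for three historical
# products plus a few calibration runs of a new (fourth) product.
n_products = 4
X_hist = rng.uniform(0, 1, size=(30, 2))   # historical process conditions
ids_hist = rng.integers(0, 3, size=30)     # products 0..2
X_new = rng.uniform(0, 1, size=(4, 2))     # few runs of the new product
ids_new = np.full(4, 3)

def titer(x, pid):
    # Toy response: shared trend plus a product-specific offset.
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] + 0.2 * pid

y_hist = titer(X_hist, ids_hist)
y_new = titer(X_new, ids_new)

# Augment inputs with one-hot product indicators so one GP can pool
# historical and new-product data (the knowledge transfer step).
X_train = np.vstack([
    np.hstack([X_hist, one_hot(ids_hist, n_products)]),
    np.hstack([X_new, one_hot(ids_new, n_products)]),
])
y_train = np.concatenate([y_hist, y_new])

gp = GaussianProcessRegressor(
    kernel=ConstantKernel() * RBF(length_scale=np.ones(X_train.shape[1])),
    normalize_y=True,
)
gp.fit(X_train, y_train)

# Predict for the new product at unseen conditions.
X_test = np.hstack([rng.uniform(0, 1, size=(5, 2)),
                    one_hot(np.full(5, 3), n_products)])
mean, std = gp.predict(X_test, return_std=True)
print(mean, std)
```

In this formulation, the learned length scales on the indicator dimensions control how strongly information is shared across products: short length scales keep products nearly independent, while long ones let historical data inform predictions for the new product.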
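
The calibration design objective itself is not given in the abstract beyond the statement that it exploits differences in the process evolution of historical products. Purely as a hypothetical illustration of that idea, and not the paper's actual objective function, one could score candidate runs by how much per-product historical models disagree, as sketched below; the criterion, the candidate grid, and the design size of four are all assumptions.

```python
# Hypothetical calibration-design criterion: prefer candidate runs where
# GP models fitted to individual historical products disagree most.
# This illustrates "exploiting differences between historical products";
# it is NOT the customized objective proposed in the paper.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

def response(x, offset):
    # Toy per-product response with a product-specific slope.
    return np.sin(3 * x[:, 0]) + offset * x[:, 1]

# Fit one GP per historical product on toy data.
models = []
for offset in (0.2, 0.5, 0.9):  # three hypothetical historical products
    X = rng.uniform(0, 1, size=(15, 2))
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=[0.3, 0.3]))
    gp.fit(X, response(X, offset))
    models.append(gp)

# Score candidate runs by the spread of predictions across historical
# products: high spread marks conditions that discriminate products and
# should therefore be informative for calibrating a transfer model.
candidates = rng.uniform(0, 1, size=(200, 2))
preds = np.stack([m.predict(candidates) for m in models])  # (products, candidates)
spread = preds.std(axis=0)

# Select the four highest-spread candidates as the calibration design.
design = candidates[np.argsort(spread)[-4:]]
print(design)
```

A practical criterion would likely also enforce diversity among the selected runs (top-scoring candidates can cluster in one region), for instance by combining the spread score with a space-filling term.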