Deep learning-based image analysis offers great potential in clinical practice. However, it faces two main challenges: the scarcity of large-scale annotated clinical data for training and susceptibility to adversarial data at inference. As an example, an artificial intelligence (AI) system could check patient positioning by segmenting anatomical structures in medical images and evaluating their relative positions. Nevertheless, the data available to train such an AI system may be highly imbalanced, consisting mostly of well-positioned images. We therefore propose using synthetic X-ray images and annotation masks, forward projected from 3D photon-counting CT volumes, to create realistic non-optimally positioned X-ray images for training. An open-source model (TotalSegmentator) was used to annotate the clavicles in the 3D CT volumes. We evaluated model robustness with respect to the internal (simulated) patient rotation α for models trained on real data only and for models trained on combined real and synthetic data. Our results show that the models trained on real and synthetic data achieve Dice score improvements of 3% to 15% across the different α groups compared to the model trained on real data only. We thus demonstrate that synthetic data can be used as a training supplement to enrich heavily underrepresented conditions and increase model robustness.
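To make the core of the pipeline concrete, below is a minimal sketch of forward projecting a rotated CT volume and its annotation mask into a 2D synthetic radiograph, and of the Dice score used for evaluation. It assumes a simple parallel-beam geometry, a (z, y, x) volume layout, and projection along the anterior-posterior axis; the function names, axis conventions, and use of scipy are illustrative assumptions, not the paper's actual implementation, which works on photon-counting CT data and may use a more realistic projection geometry.

```python
import numpy as np
from scipy.ndimage import rotate


def forward_project(volume: np.ndarray, alpha_deg: float) -> np.ndarray:
    """Rotate the CT volume by alpha degrees about the patient's
    longitudinal (z) axis, i.e. in the axial (y, x) plane, then sum
    attenuation along the AP (y) direction to form a parallel-beam
    synthetic radiograph."""
    rotated = rotate(volume, alpha_deg, axes=(1, 2), reshape=False, order=1)
    return rotated.sum(axis=1)  # integrate along the projection axis


def project_mask(mask: np.ndarray, alpha_deg: float) -> np.ndarray:
    """Forward-project a binary 3D annotation mask the same way; any ray
    passing through at least one labeled voxel becomes 2D foreground."""
    rotated = rotate(mask.astype(np.float32), alpha_deg,
                     axes=(1, 2), reshape=False, order=0)
    return (rotated.sum(axis=1) > 0).astype(np.uint8)


def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)


# Example: simulate a non-optimally positioned acquisition (alpha = 10 deg)
# with placeholder data standing in for a CT volume and a clavicle mask.
ct = np.random.rand(64, 64, 64)
clavicle_mask = np.zeros_like(ct, dtype=np.uint8)
clavicle_mask[20:25, 10:50, 15:50] = 1

synthetic_xray = forward_project(ct, alpha_deg=10.0)
synthetic_label = project_mask(clavicle_mask, alpha_deg=10.0)
```

Sweeping `alpha_deg` over a range of values yields the rotation groups over which robustness is evaluated, with each synthetic image paired to its projected ground-truth mask for training and Dice scoring.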