Multi-camera calibration is an essential step for many spatially aware applications, such as robotic navigation, augmented reality, and 3D human pose estimation. Traditional calibration methods use off-the-shelf checkerboards or triangles to define the known world coordinate system, with their corners serving as control points; this heavily depends on specific calibration patterns and is unsuitable for environments where such patterns are unavailable. In this paper, an automatic method is proposed to calibrate a multi-camera system without the aid of a known calibration pattern. The key idea of the proposed method is that the authors treat the human body, which is always available, as the counterpart of the calibration pattern. The authors’ approach starts with binocular camera calibration, in which the extrinsic and intrinsic parameters are estimated in sequence and then refined by a joint optimisation. Building on the results of each binocular calibration, the multi-camera system is calibrated in three steps: (i) parameter initialisation, (ii) extrinsic parameter optimisation, and (iii) joint optimisation of intrinsic and extrinsic parameters. Since the approach requires no calibration pattern beyond a single visible person, it is flexible and easy to implement. Real experiments are conducted across different scenes, camera angles, and camera settings, and human pose estimation with the calibrated multi-camera system is additionally performed for an exhaustive evaluation. The experimental results demonstrate that the authors’ method outperforms the traditional method that relies on a specific calibration pattern.
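As a concrete illustration of the binocular stage, the following is a minimal Python/OpenCV sketch, not the authors’ implementation: it assumes matched 2D human-keypoint detections from the two views (e.g. from an off-the-shelf pose estimator) and rough initial intrinsics, recovers the relative pose from the essential matrix, and then jointly refines focal lengths, extrinsics, and the triangulated keypoints by minimising reprojection error. All function and variable names are illustrative.

```python
# Hypothetical sketch of pattern-free binocular calibration from matched
# human-keypoint detections (not the authors' code): extrinsics from the
# essential matrix, then joint refinement of intrinsics and extrinsics.
import cv2
import numpy as np
from scipy.optimize import least_squares

def calibrate_pair(kp1, kp2, K1, K2):
    """kp1, kp2: (N, 2) matched 2D keypoints; K1, K2: initial 3x3 intrinsics."""
    # Extrinsics first: essential matrix + cheirality check give R, t (up to scale).
    E, _ = cv2.findEssentialMat(kp1, kp2, K1, method=cv2.RANSAC,
                                prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, kp1, kp2, K1)

    # Triangulate the keypoints to obtain an initial 3D structure.
    P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K2 @ np.hstack([R, t])
    X = cv2.triangulatePoints(P1, P2, kp1.T, kp2.T)
    X = (X[:3] / X[3]).T  # (N, 3)

    # Joint optimisation: refine focal lengths, relative pose, and 3D points
    # together by minimising the reprojection error in both views.
    def residuals(p):
        f1, f2 = p[0], p[1]
        rvec, tvec = p[2:5], p[5:8]
        pts = p[8:].reshape(-1, 3)
        K1r = np.array([[f1, 0, K1[0, 2]], [0, f1, K1[1, 2]], [0, 0, 1]])
        K2r = np.array([[f2, 0, K2[0, 2]], [0, f2, K2[1, 2]], [0, 0, 1]])
        proj1, _ = cv2.projectPoints(pts, np.zeros(3), np.zeros(3), K1r, None)
        proj2, _ = cv2.projectPoints(pts, rvec, tvec, K2r, None)
        return np.concatenate([(proj1.reshape(-1, 2) - kp1).ravel(),
                               (proj2.reshape(-1, 2) - kp2).ravel()])

    rvec0, _ = cv2.Rodrigues(R)
    p0 = np.concatenate([[K1[0, 0], K2[0, 0]], rvec0.ravel(), t.ravel(), X.ravel()])
    sol = least_squares(residuals, p0)
    R_ref, _ = cv2.Rodrigues(sol.x[2:5])
    return sol.x[0], sol.x[1], R_ref, sol.x[5:8]
```

In the multi-camera setting, the outputs of such pairwise estimates would serve only as the initialisation described in step (i), with steps (ii) and (iii) refining all extrinsic and intrinsic parameters jointly across cameras.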