The combination of light detection and ranging (LiDAR) sensors and cameras enables a mobile robot to perceive its environment with multimodal data, which is a key factor in achieving robust perception. Traditional frame cameras are sensitive to changing illumination conditions, motivating us to introduce novel event cameras to make LiDAR-camera fusion more complete and robust. However, to jointly exploit these sensors, the challenging extrinsic calibration problem must be addressed. This article proposes an automatic checkerboard-based approach to calibrate the extrinsic parameters between a LiDAR and a frame/event camera, with the following four contributions: 1) we present an automatic method for extracting features and tracking the checkerboard in LiDAR point clouds; 2) we reconstruct realistic frame images from event streams, enabling traditional corner detectors to be applied to event cameras; 3) we propose an initialization-refinement procedure that estimates the extrinsics using point-to-plane and point-to-line constraints in a coarse-to-fine manner; 4) we introduce a unified and globally optimal solution to the two optimization problems arising in calibration. Our approach has been validated with extensive experiments on 19 simulated and real-world datasets, where it outperforms the state-of-the-art.
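As a brief illustrative sketch (the symbols below are introduced here and are not taken from the article): point-to-plane and point-to-line constraints of the kind named in contribution 3) are commonly written as residuals of the form

\[
r^{\text{plane}}_i = \mathbf{n}^\top \left( \mathbf{R}\,\mathbf{p}_i + \mathbf{t} \right) + d,
\qquad
r^{\text{line}}_j = \left\lVert \left( \mathbf{R}\,\mathbf{p}_j + \mathbf{t} - \mathbf{q} \right) \times \mathbf{v} \right\rVert,
\]

where \((\mathbf{R}, \mathbf{t})\) is the LiDAR-to-camera extrinsic transform, \(\mathbf{p}_i\) is a LiDAR point on the checkerboard plane with camera-frame unit normal \(\mathbf{n}\) and offset \(d\), and \(\mathbf{q}\) and \(\mathbf{v}\) are a point on and the unit direction of a board edge line. A refinement step of this kind would then minimize the sum of squared residuals over all correspondences.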