Abstract

This paper focuses on the Light Detection and Ranging (LiDAR)–Inertial Measurement Unit (IMU) simultaneous localization and mapping (SLAM) problem: how to fuse measurements from a LiDAR and an IMU to estimate the robot's poses online and build a consistent map of the environment. This paper presents LTA-OM, an efficient, robust, and accurate LiDAR SLAM system. Employing fast direct LiDAR-inertial odometry (FAST-LIO2) as the LiDAR–IMU odometry and Stable Triangle Descriptor as the loop detection method, LTA-OM is implemented to be functionally complete, including loop detection and correction, false-positive loop closure rejection, long-term association (LTA) mapping, and multisession localization and mapping. One novelty of this paper is real-time LTA mapping, which exploits the direct scan-to-map registration of FAST-LIO2 and employs the corrected history map to provide direct global constraints to the LIO mapping process. LTA mapping also has the notable advantage of achieving drift-free odometry at revisited places. In addition, a multisession mode is designed to allow the user to store the current session's results, including the corrected map points, optimized odometry, and descriptor database, for future sessions. The benefits of this mode are additional accuracy improvement and consistent map stitching, which is helpful for lifelong mapping. Furthermore, LTA-OM offers features valuable for robot control and path planning, including high-frequency real-time odometry, drift-free odometry at revisited places, and fast loop closing convergence. LTA-OM is versatile: it is applicable to both multiline spinning and solid-state LiDARs, and to both mobile robots and handheld platforms. In experiments, we exhaustively benchmark LTA-OM against other state-of-the-art LiDAR systems on 18 data sequences. The results show that LTA-OM consistently outperforms other systems in trajectory accuracy, map consistency, and time consumption. The robustness of LTA-OM is validated in a challenging scene: a multilevel building with similar structures at different levels. A video demonstrating our system can be found at https://youtu.be/DVwppEKlKps.