Fast pose estimation (PE) is vital for agile autonomous robots to carry out their tasks successfully. While Global Navigation Satellite Systems (GNSS) such as the Global Positioning System (GPS) have traditionally been used alongside Inertial Navigation Systems (INS) for PE, their viability is compromised in indoor and urban environments by low update rates and inadequate signal coverage. Visual-Inertial Odometry (VIO) is gaining popularity as a practical alternative to GNSS/INS systems in GNSS-denied environments. Among VIO-based methods, the Multi-State Constraint Kalman Filter (MSCKF) has garnered significant attention for its robustness, speed, and accuracy. Nevertheless, the high computational cost of image processing remains a challenge for real-time implementation on resource-constrained vehicles. In this paper, an enhanced version of the MSCKF is proposed. The proposed approach differs from the original MSCKF in the feature marginalization and state pruning steps of the algorithm. This new design yields a faster algorithm with comparable accuracy. We validate the proposed algorithm on both an open-source dataset and real-world experiments. The results demonstrate that the proposed Fast-MSCKF (FMSCKF) is approximately six times faster and at least 20% more accurate in final position estimation than the standard MSCKF.