Checkpointing is a typical approach to tolerating failures in today's supercomputing clusters and computational grids. Checkpoint data can be saved in central stable storage, in processor memory (as in diskless checkpointing), or on local disk (replacing memory with local disk in diskless checkpointing). Where the checkpoint data is saved has a great impact on the performance of a checkpointing scheme: fault tolerance schemes with higher efficiency usually save the checkpoint data closer to the processor. However, when failures are handled at the application level, the storage hierarchy of a platform is often not known when the fault tolerance scheme is designed. It is therefore often difficult to decide which checkpointing scheme to choose at application design time. In this paper, we demonstrate that good fault tolerance efficiency can be achieved by adaptively choosing where to store the checkpoint data at run time according to the specific characteristics of the platform. We analyze the performance of different checkpointing schemes and propose an efficient adaptive checkpointing scheme to incorporate fault tolerance into high performance computing applications.