The “fast iterative shrinkage-thresholding algorithm,” also known as FISTA, is one of the best-known first-order optimization schemes in the literature, as it achieves the optimal $O(1/k^2)$ worst-case convergence rate for the objective function value. However, despite this optimal theoretical rate, in practice the (local) oscillatory behavior of FISTA often hampers its efficiency. Over the years, various efforts have been made to improve the practical performance of FISTA, such as monotone FISTA, restarting FISTA, and backtracking strategies. In this paper, we propose a simple yet effective modification of the original FISTA scheme with two advantages: it allows us to (1) prove the convergence of the generated sequence and (2) design a so-called lazy-start strategy, which can be up to an order of magnitude faster than the original scheme. Moreover, we propose novel adaptive and greedy strategies that further improve the practical performance. The advantages of the proposed schemes are demonstrated on problems arising from inverse problems, machine learning, and signal/image processing.
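For reference, the original FISTA iteration discussed above can be sketched as follows. This is a minimal, illustrative implementation (not the modified scheme proposed in the paper), applied to a LASSO problem $\min_x \tfrac12\|Ax-b\|^2 + \lambda\|x\|_1$; the function names and the fixed step size $1/L$ are choices made here for illustration.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista_lasso(A, b, lam, n_iter=500):
    """Original FISTA (Beck-Teboulle) for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of grad f
    x = x_prev = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)             # gradient of smooth part at y
        x = soft_threshold(y - grad / L, lam / L)   # forward-backward step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_prev) # momentum extrapolation
        x_prev, t = x, t_next
    return x
```

The momentum weight $(t_k-1)/t_{k+1}$ tends to $1$, which is the source of both the accelerated rate and the oscillations mentioned above; the paper's modification and restarting strategies target exactly this behavior.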