Several optimization schemes are known for convex optimization problems. Significant progress beyond convexity was made by considering the class of functions representable as a difference of convex functions, which constitutes the backbone of nonconvex programming and global optimization. In this article, we introduce new algorithms for minimizing the difference of a continuously differentiable function and a convex function that accelerate the convergence of the classical proximal point algorithm. We prove that the point computed by the proximal point algorithm can be used to define a descent direction for the objective function at this point. Our algorithms combine the proximal point step with a line search that exploits this descent direction. We prove convergence of the algorithms and analyze their rate of convergence under the strong Kurdyka–Łojasiewicz property of the objective function.
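To fix ideas, the following is a minimal sketch of one iteration consistent with the description above, writing the objective as $\phi = g - h$ with $g$ continuously differentiable and $h$ convex; the symbols $y_k$, $d_k$, $t_k$, $\lambda_k$, $\sigma$, $\beta$, $\bar t$ and the specific sufficient-decrease rule are illustrative choices, not necessarily the exact scheme of the paper.
% Illustrative sketch only: the backtracking rule and the constants
% \sigma \in (0,1), \beta \in (0,1), \bar t > 0 are our assumptions.
\begin{align*}
  y_k &\in \operatorname*{arg\,min}_{x}\Big\{\, \phi(x) + \tfrac{1}{2\lambda_k}\|x - x_k\|^{2} \,\Big\}
      && \text{(classical proximal point step)}\\
  d_k &:= y_k - x_k, \qquad \phi'(y_k; d_k) < 0 \ \text{whenever } d_k \neq 0
      && \text{(descent direction at } y_k\text{)}\\
  x_{k+1} &:= y_k + t_k d_k, \quad
      t_k := \max\big\{\, \bar t\,\beta^{j} : j \in \mathbb{N},\
      \phi(y_k + \bar t\,\beta^{j} d_k) \le \phi(y_k) - \sigma\,\bar t^{2}\beta^{2j}\,\|d_k\|^{2} \,\big\}
      && \text{(backtracking line search)}
\end{align*}
Note that taking $t_k = 0$ recovers the pure proximal point update $x_{k+1} = y_k$, so in this sketch each iteration decreases the objective at least as much as the classical proximal point step.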