Pose transfer has recently gained significant attention, particularly for its user-friendly applications in the animation industry. The objective is to transform a given RGB image into a new target pose. This process involves two consecutive tasks: first, warping the image to roughly align with the target pose, and then using this rough estimate to generate a photorealistic image of the input in the desired pose. The primary challenge lies in the first task, where the image undergoes a coarse transformation to its new location in the target pose. Current deep learning approaches rely on first-order warping, employing an affine transformation to move all image pixels. Despite yielding promising results, this approach struggles with complex deformations, mainly due to the simplistic nature of its linear function. In contrast, we propose transferring patches using a set of correlation layers, in which the warping of each image pixel is estimated individually. We additionally introduce a constraint that minimizes the energy of the second derivatives across the entire warping map. This preserves the coherence of local textures after warping, a property that affine-based transformations guarantee by restricting the transition to a single linear function over all image pixels. Our approach thus preserves the integrity of local textures, akin to the affine transformation, while estimating the warping for each pixel individually, thereby enabling finer adjustment of the input sample to the target pose. We demonstrate the superior performance of this technique compared to affine-based strategies on the well-known DeepFashion dataset.
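
As a concrete illustration of the regularizer described above, the second-derivative energy of a dense warping map can be sketched as follows. This is a minimal NumPy sketch under our own assumptions: the function name and the discrete finite-difference form are illustrative choices, not the paper's exact implementation.

```python
import numpy as np

def second_order_energy(flow):
    """Sum of squared second-order finite differences of a warping map.

    flow: array of shape (H, W, 2) holding a per-pixel displacement vector.
    Returns a scalar penalty; minimizing it favors locally smooth warps
    while still allowing each pixel's warping to be estimated individually.
    (Illustrative sketch; the paper's exact discretization may differ.)
    """
    # Second differences along the vertical (row) axis: f[i-1] - 2*f[i] + f[i+1]
    d2y = flow[:-2, :, :] - 2.0 * flow[1:-1, :, :] + flow[2:, :, :]
    # Second differences along the horizontal (column) axis
    d2x = flow[:, :-2, :] - 2.0 * flow[:, 1:-1, :] + flow[:, 2:, :]
    return float(np.sum(d2y ** 2) + np.sum(d2x ** 2))
```

Note that any affine warp, being linear in pixel coordinates, has zero second-derivative energy, which is why such a penalty recovers the texture-preserving behavior of affine warping while permitting finer per-pixel adjustments.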