Due to the inherent mismatch between rotated objects and horizontal features, feature point misalignment has been a persistent challenge in Rotated Object Detection (ROD). Specifically, owing to the sampling pattern of standard convolution, foreground features are often mixed with background noise, which can confuse the model and hinder feature point alignment during training. To mitigate this issue, previous methods rely on fixed positions derived from predicted boxes by introducing an additional refinement stage. However, merely learning fixed position priors during training can result in suboptimal alignment and inefficiency during inference. This paper introduces a dynamic point alignment detector that addresses feature misalignment and inference inefficiency simultaneously. The method consists of two components: a fine-grained points generator (FPG), which captures key information, and a point alignment module (PAM), which derives precise feature representations. Together, these modules empower the detector to dynamically perceive rotated objects, extracting more comprehensive and well-grounded features from the feature maps. In effect, our method enables the model to independently identify and prioritize valuable features during training, so that at inference time results can be predicted directly without additional alignment operations. Experimental results demonstrate that our method achieves competitive or superior performance, with average precision (AP) values of 79.33%, 95.73%, and 63.37% on the DOTA, HRSC2016, and DIOR-R datasets, respectively. Code will be publicly available at https://github.com/Xuyihaoby/PARDet.
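To make the core idea of point-based feature alignment concrete, the following is a minimal sketch of sampling a feature map at a set of learned offset points via bilinear interpolation and aggregating the results. This is an illustrative toy in NumPy, not the authors' implementation: the function names (`bilinear_sample`, `align_features`), the mean aggregation, and the offset format are all assumptions for exposition.

```python
import numpy as np

def bilinear_sample(feat, x, y):
    """Bilinearly sample a (C, H, W) feature map at a continuous (x, y) location."""
    C, H, W = feat.shape
    # Clamp the query point to the valid coordinate range.
    x = float(np.clip(x, 0, W - 1))
    y = float(np.clip(y, 0, H - 1))
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    wx, wy = x - x0, y - y0
    # Weighted combination of the four surrounding grid cells.
    return ((1 - wx) * (1 - wy) * feat[:, y0, x0]
            + wx * (1 - wy) * feat[:, y0, x1]
            + (1 - wx) * wy * feat[:, y1, x0]
            + wx * wy * feat[:, y1, x1])

def align_features(feat, center, offsets):
    """Aggregate features sampled at center + each (dx, dy) offset (mean pooling).

    In a detector, the offsets would be predicted per location, letting the
    sampling points follow a rotated object instead of an axis-aligned grid.
    """
    cx, cy = center
    samples = [bilinear_sample(feat, cx + dx, cy + dy) for dx, dy in offsets]
    return np.mean(samples, axis=0)

# Toy usage: a 2-channel 4x4 feature map, sampled at sub-pixel positions.
feat = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
aligned = align_features(feat, center=(1.5, 1.0), offsets=[(-0.5, 0.0), (0.5, 0.0)])
```

In an actual detector this sampling would be differentiable (e.g., implemented with a grid-sampling operator), so the offset-predicting branch can be trained end-to-end; the sketch only shows the forward interpolation arithmetic.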