Lane detection plays a crucial role in autonomous driving, and 3D lane detection methods have advanced rapidly in recent years. A common approach simplifies the problem by transforming images into Bird's Eye View (BEV) space. However, existing methods still struggle to identify lanes accurately in complex driving scenarios such as slopes and extreme weather. This paper therefore proposes a dynamically updated lane detection method called DynamicallyLane. The method adopts a variant of ConvNeXt V2-N as the backbone for feature extraction, employs dynamic learnable encoding together with deformable attention, achieves precise conversion from the frontal view to BEV through a novel representation of 3D reference points, and fuses features with an Enhanced BEV Features module, yielding richer and more semantically informative representations that ease model learning. Experimental results show that DynamicallyLane achieves an F-score of 56.2% on the OpenLane dataset and performs strongly on the Apollo 3D Synthetic dataset. Our code is available at https://github.com/Tafble/lane_detection.
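The frontal-view-to-BEV conversion mentioned above rests on a standard idea: place 3D reference points on a BEV grid, project them into the image with the camera model, and sample frontal-view features at the projected locations. The sketch below is only an illustration of that projection-and-sampling step, not the authors' implementation; the camera intrinsics, extrinsics, grid extent, and function names are all assumptions.

```python
import numpy as np

def project_points(points_3d, K, T_cam_from_world):
    """Project Nx3 world points to Nx2 pixel coordinates (pinhole model)."""
    n = points_3d.shape[0]
    homog = np.hstack([points_3d, np.ones((n, 1))])   # N x 4 homogeneous
    cam = (T_cam_from_world @ homog.T).T[:, :3]       # N x 3, camera frame
    uv = (K @ cam.T).T                                # N x 3, image plane
    return uv[:, :2] / uv[:, 2:3]                     # perspective divide

def bilinear_sample(feat, uv):
    """Bilinearly sample an HxWxC feature map at Nx2 (u, v) pixel locations."""
    h, w, _ = feat.shape
    u = np.clip(uv[:, 0], 0.0, w - 1.001)
    v = np.clip(uv[:, 1], 0.0, h - 1.001)
    u0, v0 = u.astype(int), v.astype(int)
    du, dv = (u - u0)[:, None], (v - v0)[:, None]
    return (feat[v0, u0] * (1 - du) * (1 - dv)
            + feat[v0, u0 + 1] * du * (1 - dv)
            + feat[v0 + 1, u0] * (1 - du) * dv
            + feat[v0 + 1, u0 + 1] * du * dv)

# Hypothetical BEV grid of reference points on the ground plane (z = 0),
# x in [-1, 1] m (lateral), y in [5, 15] m (forward).
xs, ys = np.meshgrid(np.linspace(-1, 1, 8), np.linspace(5, 15, 8))
ref_points = np.stack([xs.ravel(), ys.ravel(), np.zeros(64)], axis=1)

# Assumed camera: 640x480 image, focal length 500 px, mounted 1.5 m high,
# looking forward (world y becomes camera z, world z points up).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
T = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0, 1.5],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

uv = project_points(ref_points, K, T)

# Toy feature map whose single channel equals the column index, so sampling
# at (u, v) should return u itself -- a sanity check for the bilinear sampler.
feat = np.zeros((480, 640, 1))
feat[:, :, 0] = np.arange(640)
bev = bilinear_sample(feat, uv)   # one feature vector per BEV reference point
```

In a full model, the learnable encoding and deformable attention would refine where around each projected reference point the features are gathered, rather than sampling a single fixed location as above.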