Autonomous navigation in farmland is a key technology for achieving autonomous management of maize fields. Among the various navigation techniques, visual navigation based on widely available RGB images is a cost-effective solution. However, current mainstream methods for maize crop row detection often rely on highly specialized, manually crafted heuristic rules, which limits their scalability. To simplify the solution and improve its generality, we propose a novel crop row annotation strategy that models the strip-like structure of the central region of each crop row, effectively avoiding interference from the lateral growth of crop leaves. Building on this, we develop InstaCropNet, a deep network with a dual-branch architecture that performs end-to-end segmentation of crop row instances. We then locate the positions of the different crop row instances via row-anchor segmentation and perform line fitting. Experimental results show that our method achieves an average angular deviation of no more than 2° and a crop row detection accuracy of 96.5%.
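To illustrate the last step of the pipeline, the sketch below shows one way row-anchor sampling followed by line fitting could look. It is a minimal illustration, not the paper's implementation: the function name `fit_row_line`, the number of anchors, and the use of a binary per-instance mask with least-squares fitting via `np.polyfit` are all assumptions made for this example.

```python
import numpy as np

def fit_row_line(instance_mask, num_anchors=16):
    """Fit a line to one crop-row instance via row-anchor sampling (illustrative sketch).

    instance_mask: 2-D boolean array, True where this crop-row instance is segmented.
    Returns (slope, intercept) of the model col = slope * row + intercept,
    plus the line's angular deviation from the image's vertical axis in degrees.
    """
    h = instance_mask.shape[0]
    rows, cols = [], []
    # Sample a fixed set of anchor rows spread over the image height.
    for r in np.linspace(0, h - 1, num_anchors).astype(int):
        hits = np.flatnonzero(instance_mask[r])
        if hits.size:                    # anchor row intersects this instance
            rows.append(r)
            cols.append(hits.mean())     # horizontal center of the row segment
    slope, intercept = np.polyfit(rows, cols, 1)   # least-squares line fit
    angle_deg = np.degrees(np.arctan(slope))       # deviation from vertical
    return slope, intercept, angle_deg

# Toy example: a perfectly vertical strip centered at column 10.
mask = np.zeros((100, 32), dtype=bool)
mask[:, 9:12] = True
slope, intercept, angle = fit_row_line(mask)
```

For a vertical strip the fitted slope and angular deviation are essentially zero, matching the intuition that a well-detected crop row aligned with the camera axis yields no angular error.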