One of the most important parameters in the design and implementation of drip irrigation systems is the accurate prediction of the wetting pattern dimensions around the emitters, which enables precise determination of emitter and lateral spacing. This study provides a comprehensive experimental and computational investigation into the accurate prediction of wetting patterns (dimensions and area) in layered-texture soil profiles under pulse drip irrigation. To this end, 1217 sets of experiments, covering the physical and hydrometric properties of two-layered soil profiles under pulse drip irrigation, were first carried out at the University of Kurdistan, Iran. Then, a new hybrid deep learning (DL) approach, comprising Boruta extreme gradient boosting (Boruta-XGB) feature selection coupled with a bidirectional gated recurrent unit (Bi-GRU) scheme, was designed to predict the diameter of horizontal distribution (D), the downward vertical distribution (V), and the wetting area below the emitter (A). For each scenario, the most influential predictors, grouped into input combinations (C1, C2, and C3), were extracted from the nine available inputs using Boruta-XGB and then employed in the Bi-GRU model. A multilayer perceptron neural network (MLP) and adaptive boosting trees (Adaboost) were also developed as benchmark models, compared using several metrics (e.g., the correlation coefficient (R), root mean square error (RMSE), and Kling-Gupta efficiency (KGE)) and various infographic analyses.
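The Boruta screening step can be illustrated with a minimal sketch. This is not the authors' implementation: it uses scikit-learn's `GradientBoostingRegressor` as a stand-in for XGBoost, synthetic data in place of the experimental dataset, and a simplified shadow-feature vote (a feature is kept if its importance beats the best shuffled "shadow" copy in a majority of rounds).

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic stand-in for the experimental data: 9 candidate inputs,
# of which only the first two actually drive the target.
rng = np.random.default_rng(42)
n, p = 500, 9
X = rng.normal(size=(n, p))
y = 3.0 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(scale=0.1, size=n)

def boruta_select(X, y, n_rounds=5, seed=0):
    """Simplified Boruta pass: each round, append column-shuffled 'shadow'
    features, fit a boosted-tree model, and count a hit for every real
    feature whose importance exceeds the best shadow importance."""
    rng = np.random.default_rng(seed)
    hits = np.zeros(X.shape[1], dtype=int)
    for _ in range(n_rounds):
        shadows = rng.permuted(X, axis=0)  # shuffling each column destroys any real signal
        model = GradientBoostingRegressor(random_state=0).fit(
            np.hstack([X, shadows]), y)
        imp = model.feature_importances_
        real, shadow = imp[:X.shape[1]], imp[X.shape[1]:]
        hits += real > shadow.max()
    return np.flatnonzero(hits > n_rounds // 2)  # keep majority winners

selected = boruta_select(X, y)  # indices of retained predictors
```

The retained indices would then define an input combination (analogous to C1–C3) fed to the downstream regression model.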
The outcomes demonstrate that the C3 combination, which includes the elapsed time, emitter outflow rate (Q), initial soil moisture ratio (θ1/θ2), saturated hydraulic conductivity ratio (Ks1/Ks2), ratio of irrigation time in a cycle to the entire cycle time (Tirr/Ttot), and silt ratio (Silt1/Silt2), combined with Bi-GRU yielded better accuracy (R = 0.994, RMSE = 1.295 cm, and KGE = 0.989 for the D-scenario; R = 0.995, RMSE = 1.489 cm, and KGE = 0.992 for the V-scenario; and R = 0.996, RMSE = 76.624 cm², and KGE = 0.976 for the A-scenario) than the MLP and Adaboost models.
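The three reported metrics have standard definitions, so they can be sketched directly; this snippet assumes the common Kling-Gupta formulation KGE = 1 − √((r−1)² + (α−1)² + (β−1)²), with α the ratio of standard deviations and β the ratio of means between simulated and observed values.

```python
import numpy as np

def evaluate(obs, sim):
    """Return (R, RMSE, KGE) for observed vs. simulated values."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]              # correlation coefficient R
    rmse = np.sqrt(np.mean((sim - obs) ** 2))    # root mean square error
    alpha = sim.std() / obs.std()                # variability ratio
    beta = sim.mean() / obs.mean()               # bias ratio
    kge = 1 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)
    return r, rmse, kge

r, rmse, kge = evaluate([10.0, 20.0, 30.0, 40.0],
                        [11.0, 19.0, 32.0, 39.0])
```

A perfect model gives R = 1, RMSE = 0, and KGE = 1, which is why values such as KGE = 0.989 indicate near-ideal agreement.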