Dropout is an effective method for mitigating over-fitting when training deep neural networks (DNNs). It consists of switching off (dropping) some of the neurons of the DNN and training with only the remaining neurons active, which improves the network's generalisation and its resilience to changes in the inputs. However, the probability that a neuron in a given layer is dropped, the 'dropout rate', is a hard-to-tune parameter that affects the performance of the trained model. Moreover, there is no reason, apart from convenience during parameter tuning, why the dropout rate should be the same for all neurons in a layer. This paper proposes a novel method that guides the dropout rates with an evolutionary algorithm. In contrast to previous studies, we associate a dropout rate with each individual neuron of the network, thus allowing more flexibility in the training phase. The vector encoding the dropout rates of the entire network is interpreted as a candidate solution of a bi-objective optimisation problem: the first objective is the error reduction, on a given data batch, achieved by a set of dropout rates, while the second objective is the distance of those dropout rates from a pre-arranged constant. The second objective controls the dropout rates and prevents them from becoming too small, hence ineffective, or too large, which would drop too large a portion of the network. Experimental results show that the proposed method, named GADropout, produces DNNs that consistently outperform DNNs designed with other dropout methods, including modern advanced dropout methods representing the state of the art. GADropout has been tested on multiple datasets and network architectures.
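To make the bi-objective formulation concrete, the sketch below shows, under stated assumptions, how a candidate vector of per-neuron dropout rates could be scored: one value for the batch error reduction and one for the distance of the rates from a pre-arranged constant. The function `evaluate_batch_error`, the constant `TARGET_RATE`, and the use of a Euclidean distance are illustrative assumptions and not part of the paper; a real implementation would obtain the batch error from a training step of the DNN.

```python
import numpy as np

# Assumed pre-arranged constant towards which dropout rates are regularised.
TARGET_RATE = 0.5


def evaluate_batch_error(dropout_rates, rng):
    """Placeholder for the batch error obtained when the DNN is trained with
    the given per-neuron dropout rates. A real implementation would run a
    training step on the current data batch; here it is mocked with noise."""
    return float(rng.random())


def bi_objective_fitness(dropout_rates, baseline_error, rng):
    """Score a candidate vector of per-neuron dropout rates.

    Objective 1: error reduction on the current data batch (to be maximised).
    Objective 2: distance of the rates from the pre-arranged constant
                 (to be minimised), which keeps rates from collapsing towards
                 zero or growing so large that too much of the network drops.
    """
    error_after = evaluate_batch_error(dropout_rates, rng)
    error_reduction = baseline_error - error_after
    distance_from_target = float(np.linalg.norm(dropout_rates - TARGET_RATE))
    return error_reduction, distance_from_target


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_neurons = 128                                   # one rate per neuron
    candidate = rng.uniform(0.1, 0.9, size=n_neurons)  # candidate solution
    print(bi_objective_fitness(candidate, baseline_error=1.0, rng=rng))
```

In this sketch the two returned values would be handed to the evolutionary algorithm as separate objectives rather than combined into a single scalar, mirroring the bi-objective treatment described above.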