Prior-knowledge-based feedforward networks have shown superior performance in modeling chemical processes. In this paper, an improved differential evolution (IDEP) algorithm is proposed to encode prior knowledge into the networks during the training process. For monotonic prior knowledge, the IDEP algorithm employs a flip operation that repairs networks violating the constraint so that they conform to the required monotonicity. In addition, two strategies, a Levenberg–Marquardt descent (LMD) strategy and a random perturbation (RP) strategy, are adopted to accelerate the differential evolution (DE) search and to prevent it from being trapped in local minima, respectively. To demonstrate the efficiency of the IDEP algorithm, we apply it to model two chemical curves subject to an increasing-monotonicity constraint. For comparison, four network-training algorithms without prior-knowledge constraints, as well as three existing prior-knowledge-based algorithms closely related to the IDEP algorithm, are applied to the same problems. The simulation results show that the IDEP algorithm outperforms all the other algorithms. Finally, the IDEP algorithm and its promising prospects are discussed at the end of the paper.
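The combination of a DE search with a monotonicity-enforcing flip repair can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact method: the network architecture (a single-hidden-layer tanh network), the fitness function, the target curve, all hyperparameters, and the particular flip rule (flipping the sign of an input weight whenever its product with the matching output weight is negative, which suffices to make a tanh network non-decreasing) are assumptions made here for illustration, and the LMD local-search strategy is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
H = 5  # hidden units (assumption)

def unpack(theta):
    # theta = [input weights (H), hidden biases (H), output weights (H), output bias]
    return theta[:H], theta[H:2*H], theta[2*H:3*H], theta[3*H]

def predict(theta, x):
    # single-hidden-layer tanh network: y = w_out . tanh(w_in * x + b) + b_out
    w_in, b, w_out, b_out = unpack(theta)
    return np.tanh(np.outer(x, w_in) + b) @ w_out + b_out

def flip(theta):
    # Illustrative flip repair: dy/dx = sum_i w_out[i]*w_in[i]*sech^2(.) >= 0
    # whenever w_in[i]*w_out[i] >= 0, so flip w_in[i]'s sign where the product
    # is negative to force a non-decreasing network.
    theta = theta.copy()
    w_in, _, w_out, _ = unpack(theta)
    theta[:H][w_in * w_out < 0] *= -1.0
    return theta

def mse(theta, x, y):
    return float(np.mean((predict(theta, x) - y) ** 2))

def idep_like_train(x, y, pop_size=30, gens=200, F=0.6, CR=0.9, rp_prob=0.05):
    dim = 3 * H + 1
    pop = np.array([flip(p) for p in rng.normal(scale=0.5, size=(pop_size, dim))])
    fit = np.array([mse(p, x, y) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            trial = np.where(rng.random(dim) < CR, a + F * (b - c), pop[i])
            if rng.random() < rp_prob:      # random-perturbation strategy (stand-in)
                trial = trial + rng.normal(scale=0.1, size=dim)
            trial = flip(trial)             # repair monotonicity before evaluation
            f = mse(trial, x, y)
            if f < fit[i]:                  # greedy DE selection
                pop[i], fit[i] = trial, f
    return pop[np.argmin(fit)]

x = np.linspace(0.0, 1.0, 40)
y = x ** 2                                  # increasing target curve (stand-in data)
best = idep_like_train(x, y)
pred = predict(best, np.linspace(0.0, 1.0, 200))
assert np.all(np.diff(pred) >= -1e-9)       # monotonicity holds by construction
```

Because every candidate passes through the flip repair before evaluation, the entire population stays inside the monotone-feasible region, so the final network is guaranteed non-decreasing regardless of how well the DE search converges.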