Abstract

A neural network is often used as a black box to approximate an unknown input-output relationship. One potential danger is that, if not used properly, a neural network may be mistrained and yield a false relationship between input and output data. To address this mistraining problem, we relate neural network training to commonly used constrained optimization. By analogy to constrained optimization, we apply damping and first- and second-derivative smoothing constraints to avoid over-training the neural network. Tests of porosity prediction via a neural network indicate that incorporating damping and smoothing into the training process can effectively avoid over-training and enhance prediction power. We also demonstrate that adding random noise to the input data has the effect of adding a first-derivative smoothing constraint.
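The two regularizers the abstract names can be sketched concretely. The following is a minimal illustrative example, not the authors' implementation: a one-hidden-layer network trained by gradient descent with an L2 weight penalty (playing the role of damping) and Gaussian jitter added to the inputs each epoch (which acts like a first-derivative smoothing constraint). The toy data and all hyperparameter values are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: a smooth 1-D input-output relationship
# (a stand-in for a porosity-prediction mapping; purely illustrative).
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(2.0 * X) + 0.05 * rng.standard_normal((200, 1))

# One-hidden-layer network: y_hat = tanh(x W1 + b1) W2 + b2
H = 16
W1 = 0.5 * rng.standard_normal((1, H))
b1 = np.zeros(H)
W2 = 0.5 * rng.standard_normal((H, 1))
b2 = np.zeros(1)

lr = 0.05          # learning rate (assumed value)
damping = 1e-4     # L2 weight penalty -- the "damping" constraint
noise_std = 0.05   # input jitter -- mimics first-derivative smoothing

def mse():
    """Mean squared error on the clean training data."""
    return float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))

loss_before = mse()
for epoch in range(500):
    # Jitter the inputs: equivalent in expectation to penalizing
    # the first derivative of the learned mapping.
    Xn = X + noise_std * rng.standard_normal(X.shape)
    A = np.tanh(Xn @ W1 + b1)            # hidden activations
    err = A @ W2 + b2 - y                # prediction residual
    # Backpropagation for squared error plus the L2 (damping) penalty
    gW2 = A.T @ err / len(X) + damping * W2
    gb2 = err.mean(axis=0)
    dA = (err @ W2.T) * (1.0 - A ** 2)   # tanh derivative
    gW1 = Xn.T @ dA / len(X) + damping * W1
    gb1 = dA.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
loss_after = mse()
```

Because the noise is resampled every epoch, the network cannot fit any single jittered copy of the data exactly; it is pushed toward a mapping that varies slowly with the input, which is the smoothing effect the abstract describes.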
