Abstract

In this journal, Cheng has proposed a backpropagation (BP) procedure called BPFCC for deep fully connected cascaded (FCC) neural network learning, comparing it with the neuron-by-neuron (NBN) algorithm of Wilamowski and Yu. Both BPFCC and NBN are designed to implement the Levenberg-Marquardt method, which requires an efficient evaluation of the Gauss-Newton (approximate Hessian) matrix $\nabla \mathbf{r}^{\mathsf{T}} \nabla \mathbf{r}$, the cross product of the Jacobian matrix $\nabla \mathbf{r}$ of the residual vector $\mathbf{r}$, in the nonlinear least-squares sense. Here, the dominant cost is forming $\nabla \mathbf{r}^{\mathsf{T}} \nabla \mathbf{r}$ by rank updates on each data pattern. Notably, NBN is better than BPFCC for multiple $q \, (>\!1)$-output FCC learning when the $q$ rows (per pattern) of the Jacobian matrix $\nabla \mathbf{r}$ are evaluated; however, the dominant cost (for the rank updates) is common to both BPFCC and NBN.
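For concreteness, the rank-update accumulation behind this cost can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; `jacobian_rows` and `residuals` are hypothetical helpers that return, for one data pattern, the $q \times n$ Jacobian block and the $q$ residuals of an $n$-weight network.

```python
import numpy as np

def gauss_newton_accumulate(jacobian_rows, residuals, patterns, n_weights):
    """Form the Gauss-Newton matrix J^T J (and gradient J^T r) by
    rank-q updates, one data pattern at a time."""
    JTJ = np.zeros((n_weights, n_weights))  # approximate Hessian
    JTr = np.zeros(n_weights)               # gradient of 0.5 * ||r||^2
    for p in patterns:
        Jp = jacobian_rows(p)  # hypothetical: q x n Jacobian block for pattern p
        rp = residuals(p)      # hypothetical: q residuals for pattern p
        JTJ += Jp.T @ Jp       # rank-q update: the dominant O(q n^2) cost
        JTr += Jp.T @ rp
    return JTJ, JTr
```

Each rank-$q$ update costs $O(qn^2)$ arithmetic in the number of weights $n$; this per-pattern cost, common to BPFCC and NBN, is what the paper targets.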
The purpose of this paper is to present a new, more efficient stage-wise BP procedure (for $q$-output FCC learning) that reduces this dominant cost with no rows of $\nabla \mathbf{r}$ explicitly evaluated, just as standard BP evaluates the gradient vector $\nabla \mathbf{r}^{\mathsf{T}} \mathbf{r}$ with no explicit evaluation of any rows of the Jacobian matrix $\nabla \mathbf{r}$.
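To illustrate the analogy (not the proposed stage-wise procedure itself), the toy backpropagation sketch below uses a one-hidden-layer network in place of an FCC network: a single reverse sweep seeded with the residual vector yields $\nabla \mathbf{r}^{\mathsf{T}} \mathbf{r}$ without ever materializing a row of $\nabla \mathbf{r}$. All names here are illustrative assumptions.

```python
import numpy as np

def bp_gradient(W1, W2, x, t):
    """Standard BP for one pattern: returns the gradient J^T r of
    0.5 * ||r||^2 without forming any row of the Jacobian J.
    A toy one-hidden-layer model stands in for the FCC network."""
    a = W1 @ x               # hidden pre-activations
    h = np.tanh(a)           # hidden activations
    r = W2 @ h - t           # residual vector (q outputs)
    # Reverse sweep seeded with r: one pass gives J^T r directly.
    dW2 = np.outer(r, h)                # gradient w.r.t. output weights
    delta = (W2.T @ r) * (1.0 - h**2)   # back-propagated error signal
    dW1 = np.outer(delta, x)            # gradient w.r.t. hidden weights
    return dW1, dW2
```

The proposed procedure carries this row-free evaluation strategy over to the rank updates that form $\nabla \mathbf{r}^{\mathsf{T}} \nabla \mathbf{r}$.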
