Abstract

A Tsallis-statistics-based generalization of the gradient descent dynamics (using non-extensive cost functions), recently introduced by one of us, is proposed as a learning rule in a simple perceptron. The resulting Langevin equations are solved numerically for different values of an index q (q = 1 and q ≠ 1 respectively correspond to the extensive and non-extensive cases) and for different cost functions. The results are compared with the learning curve (mean error versus time) obtained from a learning experiment carried out with human beings, showing an excellent agreement for values of q slightly above unity. This fact illustrates the possible importance of including some degree of non-locality (non-extensivity) in computational learning procedures, whenever one wants to mimic human behaviour.
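
As a minimal sketch of the kind of rule the abstract describes, the Python snippet below runs Langevin (noisy) gradient descent for a simple perceptron on a q-deformed quadratic cost, recovering ordinary extensive gradient descent at q = 1. The specific deformation used (descending E^q, so the drift is q E^(q-1) ∂E/∂w), the tanh output unit, and all parameter values are illustrative assumptions, not the paper's actual equations.

```python
import numpy as np

rng = np.random.default_rng(0)

def q_langevin_learning(X, y, q=1.1, eta=0.05, T=0.01, steps=2000):
    """Assumed Langevin learning rule for illustration:
    dw = -eta * d(E^q)/dw * dt + sqrt(2*T*dt) * xi,  xi ~ N(0, 1),
    where E is a quadratic cost; q = 1 gives plain gradient descent."""
    n_samples, n_features = X.shape
    w = rng.normal(scale=0.1, size=n_features)
    errors = []
    dt = 1.0
    for _ in range(steps):
        out = np.tanh(X @ w)                   # perceptron output
        resid = out - y
        E = 0.5 * np.mean(resid ** 2)          # quadratic cost (extensive case)
        # dE/dw for the quadratic cost with a tanh unit
        grad_E = X.T @ (resid * (1 - out ** 2)) / n_samples
        # q-deformed drift: gradient of E^q (reduces to grad_E when q = 1)
        drift = q * E ** (q - 1) * grad_E
        noise = np.sqrt(2 * T * dt) * rng.normal(size=n_features)
        w += -eta * drift * dt + noise
        errors.append(E)                       # learning curve: mean error vs time
    return w, np.array(errors)

# Toy usage: learn a random teacher perceptron and inspect the learning curve
X = rng.normal(size=(200, 10))
teacher = rng.normal(size=10)
y = np.tanh(X @ teacher)
w, learning_curve = q_langevin_learning(X, y, q=1.1)
print(f"final mean error: {learning_curve[-1]:.4f}")
```

Comparing the recorded learning curve for q = 1 against q slightly above 1 reproduces, under these assumptions, the kind of extensive-versus-non-extensive comparison the abstract refers to.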
