Abstract

In this expository paper, we give a brief introduction, with a few key references for further reading, to the inner workings of the new and successful algorithms of Deep Learning and Geometric Deep Learning, with a focus on Graph Neural Networks. We go over the key ingredients of these algorithms: the score and loss functions, and we explain the main steps in the training of a model. We do not aim to give a complete and exhaustive treatment; rather, we isolate a few concepts to provide a fast introduction to the subject. We include several appendices to complement our treatment, discussing the Kullback–Leibler divergence, regression, Multi-layer Perceptrons, and the Universal Approximation theorem.
