The link between the structure of a neural network and its attractor states is investigated, with a view to designing associative memories based on such networks. It is shown that, for any preassigned set of states to be memorized, the parameters of the network can in most cases be calculated completely so as to guarantee the stability of these states. The spin-glass formulation of the neural network problem leads to particularly simple results which, in some cases, allow an analytical evaluation of the attractivity of the memorized states.
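As a minimal illustration of how network parameters can be computed so that a preassigned set of states is stable, the sketch below uses the projection (pseudo-inverse) rule, W = X X⁺, where the columns of X are the ±1 patterns to memorize; this particular rule is an assumption for illustration, not necessarily the construction used in the paper. With this choice, WX = X exactly, so each stored pattern is a fixed point of the update x → sign(Wx).

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 16, 4  # N neurons, P patterns (illustrative sizes)

# Columns of X are the states to memorize, with components +/-1.
X = rng.choice([-1.0, 1.0], size=(N, P))

# Projection (pseudo-inverse) rule: W projects onto the span of the patterns,
# so W @ X == X and every memorized pattern satisfies sign(W x) = x.
W = X @ np.linalg.pinv(X)

for k in range(P):
    assert np.array_equal(np.sign(W @ X[:, k]), X[:, k])
print("all", P, "patterns are fixed points")
```

Unlike the Hebbian outer-product rule, this construction guarantees exact stability even for correlated patterns, as long as the patterns are linearly independent.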