This work extends existing research into artificial neural networks (Neville and Stonham, Connection Sci.: J. Neural Comput. Artif. Intell. Cognitive Res., 7, pp. 29–60, 1995; Neville, Neural Net., 45, pp. 375–393, 2002b). The previous studies of the reuse of information (Neville, IEEE World Congress on Computational Intelligence, 1998b, pp. 1377–1382; Neville and Eldridge, Neural Net., pp. 375–393, 2002; Neville, IEEE World Congress on Computational Intelligence, 1998c, pp. 1095–1100; Neville, IEEE 2003 International Joint Conference on Neural Networks, 2003; Neville, IEEE IJCNN'04, 2004 International Joint Conference on Neural Networks, 2004) are associated with a methodology that prescribes the weights rather than training them, and they work with smaller networks; here the approach is extended to larger nets. The methodology is considered in the context of artificial neural networks: geometric reuse of information is described mathematically and then validated experimentally. The theory shows that the trained weights of a neural network can be used to prescribe the weights of other nets of the same architecture, so that the prescribed nets map geometric functions related to the one originally learned. In this sense, the nets provide a means of 'reuse of information'. The work is significant in that it validates the statement that 'knowledge encapsulated in a trained multi-layer sigma-pi neural network (MLSNN) can be reused to prescribe the weights of other MLSNNs which perform similar tasks or functions'. The important point is that the other MLSNNs' weights are prescribed so as to represent related functions, which implies that the knowledge encapsulated in the initially trained MLSNN is of more use than may at first appear.
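As a rough illustration of the principle only (the paper's own construction concerns multi-layer sigma-pi units and geometric transformations, which are not reproduced here), the following NumPy sketch shows one elementary case of weight prescription with a plain two-layer feed-forward net: given the trained weights of a net computing f(x), negating the first-layer weights prescribes, without any further training, a second net of the same architecture that computes the reflected function f(-x). The architecture, sizes, and names below are hypothetical stand-ins, not the paper's MLSNNs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the "trained" weights of the source net.
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)   # input -> hidden
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)   # hidden -> output

def forward(x, W1, b1, W2, b2):
    """Two-layer net: f(x) = W2 @ sigmoid(W1 @ x + b1) + b2."""
    h = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))
    return W2 @ h + b2

# Prescribe a second net of the same architecture, with no training,
# by transforming the first net's weights: negating the first-layer
# weights yields a net that computes the reflected function f(-x),
# since sigmoid(-W1 @ x + b1) == sigmoid(W1 @ (-x) + b1).
W1_reflected = -W1

x = rng.normal(size=2)
f_at_minus_x = forward(-x, W1, b1, W2, b2)
g_at_x = forward(x, W1_reflected, b1, W2, b2)
assert np.allclose(f_at_minus_x, g_at_x)   # g(x) == f(-x) exactly
```

The reflection here is just the simplest geometric relation that can be realised by a weight transformation; it shows, in miniature, how knowledge encapsulated in one trained net can prescribe the weights of another net mapping a related function.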