This study presents the use of a neural network for bi-level image compression. In the proposed lossy compression method, the pixel locations of the image are applied as inputs to a multilayer perceptron neural network. The output of the network denotes the pixel intensity (0 or 1). The final weights of the trained neural network are quantised, represented by a few bits, Huffman encoded and then stored as the compressed image. In the decompression phase, applying the pixel locations to the trained network yields outputs that determine the intensities. Results of experiments on more than 4000 different images indicate a higher compression rate for the proposed structure than for commonly used methods such as the Comité Consultatif International Téléphonique et Télégraphique (CCITT) G4 and Joint Bi-level Image Experts Group (JBIG2) standards. Moreover, the quantisation issue in neural network deployment is addressed and a solution is proposed. Further, an adaptive technique based on binary image characteristics is applied to achieve higher compression rates.
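
To make the pipeline concrete, the following is a minimal sketch (not the authors' implementation) of the core idea: train a small multilayer perceptron to map normalised (x, y) pixel coordinates to a binary intensity, quantise its weights to a few bits, and decompress by re-evaluating the network at each coordinate. The layer size, learning rate, training length, 4-bit quantisation depth, and the toy 8x8 image are all illustrative assumptions; the Huffman coding stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 8x8 bi-level image (1 = black, 0 = white) -- assumed example data.
H = W = 8
img = (rng.random((H, W)) > 0.5).astype(float)

# Inputs: pixel coordinates normalised to [0, 1]; targets: intensities.
ys, xs = np.mgrid[0:H, 0:W]
X = np.stack([xs.ravel() / (W - 1), ys.ravel() / (H - 1)], axis=1)
t = img.ravel()[:, None]

# One hidden layer of 16 sigmoid units -- an assumed network size.
W1 = rng.normal(0, 1, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 1, (16, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):                       # plain batch gradient descent
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    d2 = (y - t) * y * (1 - y)              # output delta (squared error)
    d1 = (d2 @ W2.T) * h * (1 - h)          # hidden delta
    W2 -= 0.5 * h.T @ d2; b2 -= 0.5 * d2.sum(0)
    W1 -= 0.5 * X.T @ d1; b1 -= 0.5 * d1.sum(0)

# Quantise trained weights to a few bits (4 here -- an assumption); in the
# paper the quantised codes would then be Huffman encoded and stored.
# Biases are kept at full precision purely for brevity in this sketch.
def quantise(w, bits=4):
    lo, hi = w.min(), w.max()
    levels = 2 ** bits - 1
    codes = np.round((w - lo) / (hi - lo) * levels)
    return codes * (hi - lo) / levels + lo  # dequantised weight values

W1q, W2q = quantise(W1), quantise(W2)

# Decompression: re-evaluate the quantised network at every pixel
# coordinate and threshold the output to recover the bi-level image.
recon = (sigmoid(sigmoid(X @ W1q + b1) @ W2q + b2) > 0.5).astype(float)
print("pixels recovered:", int((recon.ravel() == t.ravel()).sum()), "/", H * W)
```

The key design point is that only the quantised weight codes need to be stored: the pixel coordinates are implicit, so the compressed size is governed by the network size and bit depth rather than by the image resolution.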