Generative neural networks (GNNs) have successfully used human-created designs to generate novel 3D models that combine concepts from disparate known solutions, which is an important aspect of design exploration. GNNs automatically learn a parameterization (or latent space) of a design space, as opposed to alternative methods that define a parameterization manually. However, GNNs are typically not evaluated using an explicit notion of physical performance, a capability that is critical for design. This work bridges that gap by proposing a method to extract a set of functional designs from the latent space of a point-cloud-generating GNN, without sacrificing the aforementioned aspects of a GNN that are appealing for design exploration. We introduce a sparsity-preserving cost function and initialization strategy for a genetic algorithm (GA) that optimizes over the latent space of a point-cloud-generating autoencoder GNN. We examine two test cases: a simple example of generating ellipsoid point clouds subject to a basic performance criterion, and a more complex example of extracting 3D designs with a low coefficient of drag. Our experiments show that the modified GA produces a diverse set of functionally superior designs while maintaining similarity to the human-generated designs in the training data set.
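To make the overall approach concrete, the Python sketch below shows a minimal genetic algorithm searching an autoencoder's latent space for high-performing designs. The function and parameter names (latent_space_ga, decode, performance, train_latents, pop_size, mut_sigma) are hypothetical placeholders, and seeding the initial population from encodings of training designs is only one plausible reading of a sparsity-preserving initialization; this is not the paper's actual cost function or implementation.

import numpy as np

def latent_space_ga(decode, performance, train_latents,
                    pop_size=64, n_gens=100, mut_sigma=0.05, elite_frac=0.25):
    """Minimize a performance objective over an autoencoder latent space.

    decode(z) -> point cloud; performance(points) -> scalar cost (lower is better);
    train_latents -> latent codes of training designs, used here to seed the
    population so early candidates stay close to the learned design manifold.
    """
    rng = np.random.default_rng(0)
    # Initialize from perturbed encodings of training designs rather than
    # random noise, so initial candidates decode to plausible shapes.
    seeds = train_latents[rng.choice(len(train_latents), size=pop_size)]
    pop = seeds + mut_sigma * rng.standard_normal(seeds.shape)

    n_elite = max(1, int(elite_frac * pop_size))
    for _ in range(n_gens):
        costs = np.array([performance(decode(z)) for z in pop])
        elite = pop[np.argsort(costs)[:n_elite]]  # keep the best-performing latents
        # Offspring: blend two elite parents, then apply Gaussian mutation.
        parents_a = elite[rng.integers(n_elite, size=pop_size - n_elite)]
        parents_b = elite[rng.integers(n_elite, size=pop_size - n_elite)]
        alpha = rng.random((pop_size - n_elite, 1))
        children = alpha * parents_a + (1 - alpha) * parents_b
        children += mut_sigma * rng.standard_normal(children.shape)
        pop = np.vstack([elite, children])

    costs = np.array([performance(decode(z)) for z in pop])
    return pop[np.argsort(costs)]  # candidate latent codes, ranked by performance

In such a setup, decode would wrap the trained autoencoder's decoder and performance would wrap a physics evaluation (e.g., a drag estimate on the decoded point cloud); the elite-plus-blend update is a generic GA choice, not the modified GA described in this work.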