The development of photonic technologies for machine learning is a promising avenue toward reducing the computational cost of image classification tasks. Here we investigate a convolutional neural network (CNN) in which the first layer is replaced by an image sensor array consisting of recently developed angle-sensitive metasurface photodetectors. This array can visualize transparent phase objects directly by recording multiple anisotropic edge-enhanced images, analogous to the feature maps computed by the first convolutional layer of a CNN. The resulting classification performance is evaluated on a realistic task (the identification of transparent cancer cells from seven different lines) through computational-imaging simulations based on the measured angular characteristics of prototype devices. Our results show that this hybrid optoelectronic network provides classification accuracy (>90%) comparable to that of its fully digital baseline CNN, but with an order-of-magnitude reduction in the number of calculations.
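To make the architecture concrete, the minimal sketch below models the optical frontend as a fixed bank of oriented edge filters acting on a phase-only image, with each output channel standing in for one anisotropic edge-enhanced image recorded by the metasurface pixel array. The kernel shapes, image size, number of orientations, and operation count are illustrative assumptions, not the authors' measured angular responses or actual network dimensions.

```python
# Illustrative sketch only (not the authors' code): the angle-sensitive sensor array
# is approximated by a fixed set of oriented derivative filters applied to a
# transparent (phase-only) object. The resulting channels play the role of the
# first-layer feature maps that the hybrid network obtains optically.
import numpy as np
from scipy.ndimage import convolve

SOBEL_X = np.array([[1, 0, -1],
                    [2, 0, -2],
                    [1, 0, -1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def optical_frontend(phase_image, num_orientations=4):
    """Return edge-enhanced images as the sensor array would record them.

    Each channel approximates an anisotropic derivative of the phase profile
    along one orientation, i.e., a first-layer convolution computed in the
    optical domain rather than digitally.
    """
    gx = convolve(phase_image, SOBEL_X)   # horizontal gradient
    gy = convolve(phase_image, SOBEL_Y)   # vertical gradient
    channels = []
    for k in range(num_orientations):
        theta = np.pi * k / num_orientations
        # Oriented derivative as a linear combination of the two gradients.
        channels.append(np.cos(theta) * gx + np.sin(theta) * gy)
    return np.stack(channels)             # shape: (num_orientations, H, W)

# Rough bookkeeping (illustrative numbers): the multiply-accumulate operations a
# conventional 3x3 first convolutional layer with 4 output maps would perform on a
# single-channel image, which the hybrid network instead offloads to the optics.
H = W = 128                               # assumed image size
k_size, n_maps = 3, 4                     # assumed kernel size and feature-map count
macs_first_layer = H * W * k_size * k_size * n_maps
print(f"Digital MACs avoided in the first layer: {macs_first_layer:,}")

# Example usage on a synthetic transparent object (a uniform phase disc).
yy, xx = np.mgrid[:H, :W]
phase = ((xx - W / 2) ** 2 + (yy - H / 2) ** 2 < (W / 4) ** 2).astype(float)
feature_maps = optical_frontend(phase)
print("Optically computed feature maps:", feature_maps.shape)
```

In this toy setting the frontend is fixed rather than trained, mirroring the paper's premise that the first-layer filters are set by the device physics; the remaining (digital) layers of the classifier would consume `feature_maps` directly, so the per-image savings correspond to the first-layer operations counted above.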