Traditional Riemann solvers fall into two broad categories: exact solvers, which require multiple iterations to achieve high accuracy, and approximate linearized solvers, which trade accuracy for fast runtimes. Here we explore learning-based Riemann solvers, termed FluxNets, as a new approach that balances accuracy and computational efficiency, especially for transcritical and supercritical flow problems involving non-ideal thermodynamics. Various designs and training strategies are applied to fully connected neural networks. Comparisons of data-driven and physics-informed loss functions, and of networks of varying size, show that compact architectures can achieve order-of-magnitude reductions in error relative to the Roe solver. Numerical validation on 1D and 2D test cases reveals that a physics-informed FluxNet promotes smoothness, numerical stability, and generalizability, and, unlike the Roe solver, produces entropy-stable results free of non-physical expansion shocks. Inference can be accelerated through parallelization, so the gains in accuracy come at one quarter the runtime of exact solvers. This trade-off between accuracy and efficiency may be justified for non-ideal flows, where even minor errors can trigger spurious oscillations and instabilities.
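To make the data-driven versus physics-informed distinction concrete, the following is a minimal sketch, not the paper's actual architecture or thermodynamic setting: it uses the 1D Burgers equation as a stand-in (where the exact Godunov flux is cheap to evaluate), a tiny fully connected network mapping left/right Riemann states to an interface flux, and an illustrative physics-informed consistency penalty requiring F(u, u) = f(u). The names `TinyFluxNet`, `data_loss`, and `physics_loss` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def godunov_flux_burgers(uL, uR):
    # Exact Riemann flux for Burgers' equation, f(u) = u^2 / 2,
    # used here as an inexpensive stand-in for an exact solver.
    f = lambda u: 0.5 * u**2
    return np.where(
        uL > uR,
        # shock: pick upwind state by the Rankine-Hugoniot shock speed (uL+uR)/2
        np.where((uL + uR) / 2 > 0, f(uL), f(uR)),
        # rarefaction: sonic point gives zero flux when the fan straddles u = 0
        np.where(uL > 0, f(uL), np.where(uR < 0, f(uR), 0.0)),
    )

class TinyFluxNet:
    """Small fully connected network mapping (uL, uR) -> interface flux."""
    def __init__(self, hidden=16):
        self.W1 = rng.normal(0.0, 0.5, (2, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.5, (hidden, 1))
        self.b2 = np.zeros(1)

    def __call__(self, uL, uR):
        x = np.stack([np.asarray(uL), np.asarray(uR)], axis=-1)
        h = np.tanh(x @ self.W1 + self.b1)
        return (h @ self.W2 + self.b2).squeeze(-1)

def data_loss(net, uL, uR):
    # Data-driven loss: match the exact (Godunov) interface flux directly.
    return np.mean((net(uL, uR) - godunov_flux_burgers(uL, uR)) ** 2)

def physics_loss(net, uL, uR):
    # Physics-informed loss (illustrative): a consistent numerical flux
    # must satisfy F(u, u) = f(u) = u^2 / 2 for any state u.
    u = np.concatenate([np.atleast_1d(uL), np.atleast_1d(uR)])
    return np.mean((net(u, u) - 0.5 * u**2) ** 2)

# Random left/right Riemann states sampled for training.
uL = rng.uniform(-1.0, 1.0, 256)
uR = rng.uniform(-1.0, 1.0, 256)
net = TinyFluxNet()
print("data loss:", data_loss(net, uL, uR))
print("physics loss:", physics_loss(net, uL, uR))
```

In practice the two terms would be combined with weights during gradient-based training; the physics term needs no labeled flux data, which is one reason physics-informed variants can generalize beyond the training distribution.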