Deep neural networks have been extensively used for the solution of both the forward and the inverse problem for dynamical systems. However, their implementation necessitates optimizing over a high-dimensional space of parameters and hyperparameters. This fact, along with the requirement of substantial computational resources, poses a barrier not only to high numerical accuracy but also to interpretability. Here, to address the above challenges, we present Random Projection-based Operator Networks (RandONets): shallow networks with random projections and tailor-made numerical analysis methods that learn linear and nonlinear operators accurately and fast. Building on previous works, we prove that RandONets are universal approximators of linear and nonlinear operators. Due to their simplicity, RandONets provide a one-step transformation of the input space, facilitating interpretability. For the evaluation of their performance, we focus on operators of PDEs. We show that RandONets outperform the "vanilla" DeepONets by several orders of magnitude, both in terms of numerical approximation accuracy and computational cost. Hence, we believe that our method will trigger further developments in the field of scientific machine learning, toward new "light" schemes that provide high accuracy while dramatically reducing the computational cost. A MATLAB toolbox for RandONets, including demos, is available on GitHub at https://github.com/GianlucaFabiani/RandONets.