Soft actuators and their sensors have traditionally been separate components with distinct roles. Because soft robots are omnidirectionally compliant, multiple sensors are needed to capture different modalities in the respective planes of motion. With the recent emergence of self-sensing actuators, the two roles have gradually converged, simplifying sensing requirements. Self-sensing typically involves embedding a conductive sensing element into the soft actuator and provides information on multiple states along the continuum. However, most self-sensing actuators are fabricated manually, which results in inconsistent sensing performance. Soft material compliance also implies that both the actuator and the sensor exhibit nonlinear behavior during actuation, making sensing more complex. In this regard, machine learning has shown promise in characterizing the nonlinear behavior of soft sensors. Beyond characterization, we show that applying machine learning to soft actuators eliminates the need to implant a sensing element to achieve self-sensing. The actuators are fabricated by 3D printing, ensuring that sensing performance is consistent across actuators. In addition, the proposed technique estimates the bending curvature of a soft continuum actuator and the external forces applied to its tip in real time. The methodology is generalizable and aims to provide a novel approach to multimodal sensing for soft robots across a variety of applications.
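
As a rough illustration of the kind of learned, sensor-free estimation described above, the sketch below trains a small multi-output regressor to map a window of internal actuation readings to bending curvature and tip force. This is a minimal sketch only: the abstract does not specify the model architecture or the input signals, so the pressure-window features, synthetic data, and MLP regressor here are assumptions for illustration, not the paper's implementation.

```python
# Minimal illustrative sketch, NOT the paper's implementation: the input signals,
# model choice, and data below are assumptions. We assume a short window of
# internal actuation readings (e.g., chamber pressure) as features and a small
# multi-output regressor that jointly predicts bending curvature and tip force.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Hypothetical training data: each row is a short history of pressure readings.
WINDOW = 10                  # number of past samples per input window (assumed)
n_samples = 2000
pressures = rng.uniform(0.0, 100.0, size=(n_samples, WINDOW))  # kPa (synthetic)

# Hypothetical targets: [bending curvature (1/m), tip force (N)] per window.
targets = np.column_stack([
    0.02 * pressures.mean(axis=1) + rng.normal(0, 0.05, n_samples),  # curvature
    0.01 * pressures[:, -1] + rng.normal(0, 0.02, n_samples),        # tip force
])

# One model estimates both quantities, i.e. multimodal sensing without an
# embedded sensing element.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
)
model.fit(pressures, targets)

# "Real-time" use: feed the latest window of readings and read out both estimates.
latest_window = pressures[-1:]
curvature_est, tip_force_est = model.predict(latest_window)[0]
print(f"curvature ~ {curvature_est:.3f} 1/m, tip force ~ {tip_force_est:.3f} N")
```

In practice the regressor would be trained against ground-truth curvature and force measurements collected offline, then queried at each control step with the latest actuation window; the synthetic targets above stand in for that calibration data.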