In automated production with collaborative robots in a manufacturing cell, avoiding collisions is crucial to ensuring the safety of workers and robots during human–robot interaction. One approach to collision detection is to use the swept volume (SV) to identify a safe protective space for operation. We learn an accurate and reliable signed distance function (SDF) network from raw point clouds of pre-computed SVs to represent a class of linear joint-space motion trajectories. The network requires only a fixed set of parameters and runs in constant time, avoiding the computational time and memory cost that complex explicit geometry imposes on collision checking during task execution. The collision clearance predicted by the learned SDF is further exploited to reduce the frequency of collision detection calls in a dynamic environment, lowering the computational cost. We assess the implicit neural representation of multiple SVs in terms of F1-score, error distance from the ground-truth surface, and 3D visualization; it compares favorably with a binary voxel network that learns a single SV at similar inference time. All geometric prediction errors lie within 4 voxels of the ground-truth surface, and most reconstruction errors are within 3 voxels. We simulate pick-and-place task execution in human–robot interaction scenarios, leveraging the learned SDF as an efficient continuous collision detector, and validate the improvements in execution time and in the number of collision detection calls.
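The idea of using an SDF's predicted clearance to skip collision detection calls can be sketched as follows. This is a minimal illustration, not the paper's implementation: a toy analytic SDF (a sphere) stands in for the learned SV network, and the function and parameter names (`check_collision`, `next_check_interval`, `margin`) are hypothetical. The key property used is that a positive signed distance bounds how far an obstacle can travel before a collision is possible, so far-away obstacles need less frequent checks.

```python
import math

# Toy analytic SDF standing in for the learned swept-volume SDF network:
# negative inside the volume, positive outside, zero on the surface.
def sdf(p, center=(0.0, 0.0, 0.0), radius=1.0):
    return math.dist(p, center) - radius

def check_collision(p, margin=0.05):
    """Flag a collision when the query point is within `margin` of the SV."""
    return sdf(p) < margin

def next_check_interval(p, max_speed=1.0, base_dt=0.01):
    """Use the clearance to stretch the time until the next detection call:
    an obstacle at distance d moving at most `max_speed` cannot reach the
    swept volume sooner than d / max_speed seconds."""
    clearance = sdf(p)
    if clearance <= 0.0:
        return base_dt  # already colliding: check at the base rate
    return max(base_dt, clearance / max_speed)
```

For example, an obstacle point 2 m outside the toy SV need not be rechecked for up to 2 s at a 1 m/s speed bound, which is how the learned SDF reduces the number of detection calls during task execution.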