In this article, an iterative-learning-based robotic controller is developed to provide a prescribed assistive or resistive force to the human user. In the proposed controller, the characteristic parameter of the human upper limb movement is first learned by the robot from the measurable interaction force using a recursive least squares (RLS)-based estimator and the Adam optimization method. The desired trajectory of the robot is then derived such that, by tracking it, the robot supplies the human's upper limb with the prescribed interaction force. With this controller, the robot automatically adjusts its reference trajectory to accommodate the differences among human users with diverse upper limb movement characteristics. By designing the performance index as an integral of the interaction force, potential adverse effects caused by time-related uncertainty during the learning process are mitigated. Experimental results demonstrate the effectiveness of the proposed method in supplying the prescribed interaction force to the human user.

Note to Practitioners—This article concentrates on developing a novel control technique that enables a robot to supply a prescribed interaction force to the human user in the presence of time-related uncertainties. The proposed control method is applicable to various human–robot interaction scenarios: for example, rehabilitation robots can use it to provide assistive or resistive forces to stroke patients, and exoskeleton robots can use it to assist human users in completing heavy-load tasks. Moreover, the desired interaction force can be tailored to individual users according to their needs and task objectives. Consequently, the proposed controller can serve diverse users and shows promising prospects in automation.
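As a rough illustration of the parameter-learning step summarized above, the sketch below combines a standard recursive least squares update with an Adam-based refinement to estimate limb characteristic parameters from interaction-force measurements. The linear-in-parameters force model, the regressor `phi`, the parameter vector `theta`, the simulated data, and all hyperparameters are illustrative assumptions for this sketch, not the formulation used in the article.

```python
import numpy as np

# Hypothetical linear-in-parameters force model: f = phi(q, dq) @ theta + noise.
# theta: unknown characteristic parameters of the human upper limb (3-dim here, assumed).
# phi:   regressor built from measurable signals (joint angle q, velocity dq), illustrative only.

def rls_update(theta, P, phi, f_meas, lam=0.98):
    """One recursive least squares step with forgetting factor lam."""
    Pphi = P @ phi                          # P * phi
    denom = lam + phi @ Pphi                # scalar normalization
    K = Pphi / denom                        # gain vector
    err = f_meas - phi @ theta              # innovation (force prediction error)
    theta = theta + K * err                 # parameter update
    P = (P - np.outer(K, phi) @ P) / lam    # covariance update
    return theta, P, err

def adam_refine(theta, batch, lr=1e-2, beta1=0.9, beta2=0.999, eps=1e-8, iters=50):
    """Adam refinement of theta on a batch of (phi, f_meas) pairs,
    minimizing the mean squared force prediction error."""
    m = np.zeros_like(theta)
    v = np.zeros_like(theta)
    for t in range(1, iters + 1):
        grad = np.zeros_like(theta)
        for phi, f in batch:
            grad += phi * (phi @ theta - f)  # gradient of 0.5*(phi@theta - f)^2
        grad /= len(batch)
        m = beta1 * m + (1 - beta1) * grad
        v = beta2 * v + (1 - beta2) * grad ** 2
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    theta_true = np.array([1.5, -0.4, 0.8])   # "ground truth" used only to simulate measurements
    theta_hat = np.zeros(3)
    P = np.eye(3) * 100.0
    history = []
    for k in range(200):
        q, dq = np.sin(0.05 * k), 0.05 * np.cos(0.05 * k)
        phi = np.array([q, dq, 1.0])           # illustrative regressor
        f_meas = phi @ theta_true + 0.01 * rng.standard_normal()
        theta_hat, P, _ = rls_update(theta_hat, P, phi, f_meas)
        history.append((phi, f_meas))
    theta_hat = adam_refine(theta_hat, history[-100:])
    print("estimated parameters:", np.round(theta_hat, 3))
```

In such a scheme, the RLS step provides fast online tracking of the parameters, while the batch refinement step can compensate for slow drift; the split shown here is only one plausible arrangement of the two estimators named in the abstract.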