Pre-programming complex robotic systems to operate in unstructured environments is extremely difficult because the programmer cannot anticipate future operating conditions: unforeseen environmental changes, mechanical wear of parts, and so on. The solution to this problem is for the robot controller to learn on-line about its own capabilities and limitations as it interacts with its environment. At the present state of technology, this poses a challenge to existing machine learning methods. We study the problem using a simple two-fingered gripper which learns to grasp an object with appropriate force: firmly enough to prevent slip, while minimising the chance of damaging the object. Three machine learning methods are used to produce a neurofuzzy controller for the gripper: off-line supervised neurofuzzy learning, and two on-line methods, namely unsupervised reinforcement learning and an unsupervised/supervised hybrid. With the two on-line methods, we demonstrate that the controller can learn through interaction with its environment to overcome simulated failure of its sensors. Further, the hybrid is shown to outperform reinforcement learning alone, adapting faster to the changed circumstances of sensor failure. The hybrid learning scheme allows us to make best use of whatever pre-labelled datasets exist and to retain effective control actions discovered by reinforcement learning.
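To make the on-line learning idea concrete, the sketch below is a minimal, illustrative reinforcement-learning loop in which a controller searches for a grip force that avoids both slip and damage. It is not the paper's neurofuzzy controller: the discrete force set, the thresholds SLIP_FORCE and DAMAGE_FORCE, and the reward shaping are all assumptions introduced here for illustration.

```python
import random

# Hypothetical object/gripper parameters (assumptions, not from the paper):
# forces below SLIP_FORCE let the object slip; forces above DAMAGE_FORCE
# risk crushing it. The learner must discover the safe band in between.
SLIP_FORCE = 4.0
DAMAGE_FORCE = 8.0

ACTIONS = [2.0, 4.5, 6.0, 7.5, 9.0]  # candidate grip forces (N)


def grasp_reward(force: float) -> float:
    """Score one grasp: penalise slip and damage, prefer the lightest safe grip."""
    if force < SLIP_FORCE:
        return -1.0              # object slips
    if force > DAMAGE_FORCE:
        return -1.0              # object damaged
    return 1.0 - 0.05 * force    # success; small penalty for excess force


def learn_grip(episodes: int = 500, alpha: float = 0.1, epsilon: float = 0.2) -> list:
    """Bandit-style Q-learning over discrete grip forces (single-state task)."""
    q = [0.0] * len(ACTIONS)
    for _ in range(episodes):
        # Epsilon-greedy exploration over grip forces.
        if random.random() < epsilon:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: q[i])
        r = grasp_reward(ACTIONS[a])
        q[a] += alpha * (r - q[a])  # one-step value update
        # On-line adaptation: if a sensor failure shifts the reward signal,
        # this same update rule re-learns the safe force band from experience.
    return q


if __name__ == "__main__":
    q = learn_grip()
    best = max(range(len(ACTIONS)), key=lambda i: q[i])
    print(f"learned grip force: {ACTIONS[best]} N, Q-values: {q}")
```

The hybrid scheme described above would extend such a loop by initialising, or periodically re-training, the controller from pre-labelled examples of good grasps, so that actions discovered by exploration are consolidated rather than forgotten.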