MoMa (Mobile Manipulator) is a mobile robot designed to assist people with motor disabilities in performing object retrieval tasks using a webcam-based gaze control system. Built from off-the-shelf components such as reproducible acrylic and 3D-printed plates, together with a webcam for eye tracking, MoMa is an inexpensive, open-source, and customizable solution for assistive robotics. The robotic system consists of a mobile base that can move forward and backward and turn in place, and a two-axis Cartesian arm equipped with a claw gripper that opens and closes. The robot's simple motion set allows for a correspondingly simple control scheme and graphical user interface (GUI). The user views the scene in front of the robot through a mounted camera and issues commands by looking at regions of the screen that correspond to controls; a convolutional neural network predicts the user's gaze, and the selected commands are sent to the robot wirelessly. The performance of the complete system has been validated through tests of the gaze prediction model, the integration of the control system, and its task completion capabilities. All design, construction, and software files are freely available online under the CC BY 4.0 license at https://doi.org/10.17632/k7yfn6wdv7.2.
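The gaze-to-command loop summarized above can be illustrated with a minimal sketch: a predicted on-screen gaze point is matched against control regions of the GUI, and the matching command is forwarded to the robot over a wireless link. The region layout, command names, network address, and helper functions below are hypothetical assumptions for illustration, not the project's actual implementation.

```python
# Minimal sketch of the gaze-to-command loop (assumed layout and protocol).
import socket
from typing import Optional

# Screen regions in normalized (0-1) coordinates mapped to robot commands.
# The real GUI layout in the MoMa project may differ.
COMMAND_REGIONS = {
    "forward":  (0.33, 0.00, 0.66, 0.33),  # (x_min, y_min, x_max, y_max)
    "left":     (0.00, 0.33, 0.33, 0.66),
    "right":    (0.66, 0.33, 1.00, 0.66),
    "backward": (0.33, 0.66, 0.66, 1.00),
}


def region_for_gaze(x: float, y: float) -> Optional[str]:
    """Return the command whose screen region contains the predicted gaze point."""
    for command, (x0, y0, x1, y1) in COMMAND_REGIONS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return command
    return None  # gaze fell outside every control region


def send_command(command: str, host: str = "192.168.4.1", port: int = 8080) -> None:
    """Send a command string to the robot over a plain TCP socket (illustrative only)."""
    with socket.create_connection((host, port), timeout=2.0) as conn:
        conn.sendall(command.encode("ascii") + b"\n")


if __name__ == "__main__":
    # In the real system the (x, y) point would come from the CNN gaze predictor
    # running on webcam frames; here a fixed point stands in for that output.
    gaze_x, gaze_y = 0.5, 0.1  # near the top-centre "forward" control
    command = region_for_gaze(gaze_x, gaze_y)
    if command is not None:
        print(f"Predicted gaze selects: {command}")
        # send_command(command)  # uncomment once a robot endpoint is available
```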