Image processing is an essential first step toward fully utilizing robotics, deep learning, and machine learning techniques. Techniques such as image enhancement, restoration, and segmentation extract pertinent information from images for use in task execution and decision-making. However, hardware implementations of these algorithms incur substantial delay, area, and power costs. This work proposes a new half-precision floating-point format with a reduced mantissa bit-size for processing and characterizing image pixels in machine learning algorithms. In the realm of imaging, lowering the mantissa bit-size of the floating-point representation used for internal calculations conserves area and power. Alongside this area and power reduction, however, comes a progressive degradation in image quality. For any imaging application, it is therefore necessary to monitor the area and power trade-offs associated with the number of bits used to process the raw data. By reporting area and power for various bit-size reductions, this work assists in selecting the bit-size for internal computations according to the accuracy requirements of the application. The pixel values obtained after applying the mantissa bit-size reduction are presented in this study, along with a theoretical explanation of the resulting error. Since multipliers and adders are required by the majority of mathematical operations in machine learning image algorithms, they are designed later in this work to process the image. The processed image is evaluated over various adjusted pixel values, and the experimental results demonstrate 75.2% to 21.3% area optimization and 66.43% to 20.4% power optimization. Finally, computing the PSNR and MSE values of the processed image allows its quality to be validated.
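The abstract does not specify the exact rounding behavior of the proposed format, but the core idea of mantissa bit-size reduction in half precision, and its quality validation via MSE and PSNR, can be sketched as follows. This is a minimal illustration assuming IEEE 754 half precision (1 sign bit, 5 exponent bits, 10 mantissa bits), plain truncation rather than rounding, and an 8-bit pixel peak of 255; the function names and the sample pixel block are hypothetical.

```python
import numpy as np

def truncate_mantissa(x, kept_bits):
    """Zero out the low (10 - kept_bits) mantissa bits of float16 values.

    Assumes IEEE 754 half precision (10 mantissa bits) and plain
    truncation, which mimics dropping low-order mantissa hardware.
    """
    bits = np.asarray(x, dtype=np.float16).view(np.uint16)
    mask = np.uint16((0xFFFF << (10 - kept_bits)) & 0xFFFF)
    return (bits & mask).view(np.float16)

def mse_psnr(reference, processed, peak=255.0):
    """Mean squared error and peak signal-to-noise ratio (dB)."""
    err = np.asarray(reference, np.float64) - np.asarray(processed, np.float64)
    mse = float(np.mean(err ** 2))
    psnr = 10.0 * np.log10(peak ** 2 / mse) if mse > 0 else float("inf")
    return mse, psnr

# Hypothetical 8-bit pixel block processed with only 4 mantissa bits kept.
pixels = np.array([[12, 47, 183], [201, 90, 255]], dtype=np.uint8)
reduced = truncate_mantissa(pixels, kept_bits=4)
mse, psnr = mse_psnr(pixels, reduced)
```

Fewer kept mantissa bits shrink the adder and multiplier datapaths (the source of the reported area and power savings) while the MSE grows and the PSNR falls, which is the trade-off the work quantifies.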