Hyperdimensional (HD) computing is a brain-inspired learning method that is widely employed in resource-constrained applications such as the Internet of Things (IoT) owing to its lightweight computation. Although HD computing is efficient in IoT applications, it suffers from high computational cost due to the large hypervector size. Thus, several studies have proposed methods to speed up HD computing and increase its efficiency. In this work, we propose PartialHD, a method for processing HD computing that divides a long hypervector into multiple partial vectors and processes each partial vector separately. In the retraining phase, this method improves accuracy by up to 1.93%. Moreover, because only a few partial vectors participate in retraining and inference under our method, the computational overhead of these phases is reduced. The evaluation shows that our method processes on average 22.2% and 36.23% of the entire hypervectors, with negligible accuracy loss, in inference and retraining, respectively. Furthermore, we propose two general architectures (lightweight and high-speed) that accelerate partial vector computations on different FPGA platforms. The results show that the lightweight architecture can accelerate HD computing on resource-constrained FPGAs such as Artix, while the high-speed architecture achieves 4.72× higher throughput than the lightweight architecture.
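The core idea of processing only a few partial vectors can be illustrated with a minimal sketch. The code below is a hypothetical illustration, not the authors' implementation: function names, the number of partials, and the early-stopping margin are all assumptions. It splits a bipolar hypervector into partial vectors and accumulates a dot-product similarity partial-by-partial, stopping once the normalized score is confident enough, so that only a fraction of the dimensions are touched.

```python
import numpy as np

def split_into_partials(hv, num_partials):
    """Split a hypervector into equally sized partial vectors.
    (Illustrative helper, not from the paper.)"""
    return np.array_split(hv, num_partials)

def partial_similarity(query, class_hv, num_partials, margin=0.1):
    """Accumulate dot-product similarity over partial vectors,
    returning early when the per-dimension score exceeds `margin`.
    The margin value is an assumed placeholder."""
    q_parts = split_into_partials(query, num_partials)
    c_parts = split_into_partials(class_hv, num_partials)
    score, dims_seen = 0.0, 0
    for qp, cp in zip(q_parts, c_parts):
        score += float(qp @ cp)   # contribution of this partial vector
        dims_seen += qp.size
        if abs(score) / dims_seen > margin:   # confident enough: stop early
            break
    return score, dims_seen

rng = np.random.default_rng(0)
D = 1000
hv = rng.choice([-1, 1], size=D)
# Comparing a hypervector with itself: the first partial already gives a
# decisive per-dimension score, so only D/10 dimensions are processed.
score, used = partial_similarity(hv, hv, num_partials=10)
```

In this toy run the query matches the class hypervector exactly, so the loop exits after the first partial and only 100 of the 1000 dimensions are processed, which mirrors the paper's observation that a small fraction of the hypervector often suffices for a confident decision.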