Due to their applications in groundbreaking technologies such as autonomous driving, biomedical imaging, and natural language processing, Convolutional Neural Networks are becoming increasingly common. With this growth in adoption, the complexity of the underlying algorithms also increases. This trend has repercussions for the computation platforms, i.e., GPUs and FPGA- or ASIC-based accelerators, and in particular for their memory access control unit, the Address Generation Unit (AGU). Current accelerators typically contain datapath-based AGUs, which offer limited adaptability to algorithm evolution: new algorithms require new hardware, which is an extremely inefficient approach in terms of time, energy, and reusability. In this work, six algorithms with different hardware implications are analyzed, and a fully programmable AGU (PAGU) that can adapt to all of them is presented. These algorithms are standard, strided, dilated, up-sampled, and padded convolution, as well as max pooling. The proposed AGU architecture is a Very Long Instruction Word (VLIW) based Application-Specific Instruction-set Processor with specialized components, including hardware counters and zero-overhead loops, as well as a powerful instruction set architecture (ISA). The goal was to minimize the trade-off between flexibility on the one hand and area, performance, and power on the other. Results show that the PAGU achieves near-optimal efficiency of one cycle per address for every algorithm under consideration except up-sampled convolution, which requires 1.7 cycles per address, evaluated on a real-world semantic segmentation test network. The PAGU area is roughly 4.6 times larger than that of the datapath approach, which is nevertheless acceptable given the high degree of flexibility.
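To make the address-generation task concrete, the following is a minimal Python sketch of the nested-loop access pattern an AGU must produce for a strided/dilated 2D convolution. The function name, row-major single-channel layout, and no-padding assumption are illustrative choices, not taken from the paper; a hardware AGU would emit one such address per cycle using counters and zero-overhead loops instead of software loops.

```python
def conv2d_input_addresses(width, height, kernel, stride=1, dilation=1, base=0):
    """Yield flat input addresses in the order a convolution sweep reads them.

    Assumes a row-major, single-channel input with no padding
    (hypothetical layout chosen for illustration).
    """
    # Output dimensions for the given kernel, stride, and dilation.
    out_w = (width - dilation * (kernel - 1) - 1) // stride + 1
    out_h = (height - dilation * (kernel - 1) - 1) // stride + 1
    for oy in range(out_h):               # output rows
        for ox in range(out_w):           # output columns
            for ky in range(kernel):      # kernel rows
                for kx in range(kernel):  # kernel columns
                    y = oy * stride + ky * dilation
                    x = ox * stride + kx * dilation
                    yield base + y * width + x


# Example: 4x4 input, 2x2 kernel, stride 2 -> four non-overlapping windows.
addrs = list(conv2d_input_addresses(4, 4, 2, stride=2))
```

Each of the six supported algorithms reduces to a variant of this loop nest, which is why a programmable loop-counter structure can cover all of them.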