Advances in modern computing systems are largely driven by the demands of machine learning applications, which are computationally intensive and have large memory footprints. Traditional von Neumann architectures fail to keep up with this application domain due to the memory bandwidth bottleneck. Processing-in-memory solutions can overcome this bottleneck by exploiting the abundant bandwidth available inside memory modules. Existing work accelerates neural network applications by moving compute units all the way to the bit-lines of the last-level cache (LLC) or DRAM. While these in-memory solutions focus on the computational model inside the memory, they take a naive approach to data management, which ultimately becomes the performance bottleneck. We propose a processing-in-memory (PIM) accelerator for neural network applications that incorporates a systolic dataflow to manage data movement efficiently. The proposed architecture is also reconfigurable, i.e., it supports systolic arrays of various topologies. We demonstrate the efficiency of the architecture by evaluating it on CNNs, RNNs, and Transformer models of varying sizes.
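
To make the systolic dataflow concrete, the following is a minimal, illustrative sketch of an output-stationary systolic array computing a matrix multiplication: operands from A stream in from the left, operands from B stream in from the top (each skewed by one cycle per row or column), and every processing element (PE) keeps a stationary partial sum while forwarding its inputs to its neighbors each cycle. This is a conceptual simulation under assumed naming and array dimensions, not the proposed architecture.

```python
# Minimal, illustrative simulation of an output-stationary systolic array
# computing C = A @ B. Conceptual sketch only, not the paper's design.
import numpy as np

def systolic_matmul(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Simulate an M x N grid of PEs: rows of A enter from the left,
    columns of B enter from the top, skewed by one cycle per row/column."""
    M, K = A.shape
    K2, N = B.shape
    assert K == K2, "inner dimensions must match"

    C = np.zeros((M, N))            # PE (i, j) holds the stationary sum C[i, j]
    total_cycles = K + M + N - 2    # cycles until all operands drain through
    for t in range(total_cycles):
        for i in range(M):
            for j in range(N):
                k = t - i - j       # skew: operand index reaching PE (i, j) at cycle t
                if 0 <= k < K:
                    # The PE multiplies the pair of operands passing through it
                    # this cycle and accumulates into its local partial sum.
                    C[i, j] += A[i, k] * B[k, j]
    return C

A = np.random.rand(4, 3)
B = np.random.rand(3, 5)
assert np.allclose(systolic_matmul(A, B), A @ B)
```

Because each PE only exchanges data with its immediate neighbors, operands are reused across the array without repeated trips to a shared buffer, which is the property the systolic dataflow exploits to reduce data movement.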