PROJECT TITLE:
A Cache Hierarchy Aware Thread Mapping Methodology for GPGPUs
Recently proposed GPGPU architectures have added a multi-level hierarchy of shared caches to better exploit the data locality of general-purpose applications. The GPGPU design philosophy allocates most of the chip area to processing cores, which results in a relatively small cache shared by a large number of cores compared with standard multi-core CPUs. Applying a proper thread mapping scheme is crucial for benefiting from constructive cache sharing and for avoiding resource contention among thousands of threads. However, due to significant differences in architectures and programming models, existing thread mapping approaches for multi-core CPUs do not perform as effectively on GPGPUs. This paper proposes a formal model that captures both the characteristics of threads and the cache sharing behavior of multi-level shared caches. With appropriate proofs, the model forms a solid theoretical foundation for the proposed cache hierarchy aware thread mapping methodology for GPGPUs with multi-level shared caches. The experiments reveal that the three-stage thread mapping methodology successfully improves data reuse at each cache level of GPGPUs and achieves an average of 2.3× to 4.3× runtime improvement compared with existing approaches.
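The core idea above — co-locating threads that share data onto the same shared cache partition to increase reuse — can be illustrated with a toy model. The sketch below is a hypothetical simplification and not the paper's actual three-stage algorithm: each thread is modeled as a set of cache line IDs it accesses, and we count the distinct lines each cache partition must fetch (cold misses) under a naive round-robin mapping versus a locality-aware mapping that groups sharers together.

```python
# Toy model (an illustrative assumption, not the paper's algorithm):
# threads that access overlapping cache lines should be mapped to the
# same shared cache partition to maximize constructive sharing.

def total_distinct_lines(threads, mapping, n_groups):
    """Sum over all cache partitions of the distinct lines fetched.

    threads  : list of sets, threads[tid] = cache line IDs accessed by tid
    mapping  : list, mapping[tid] = partition (group) index for thread tid
    n_groups : number of shared cache partitions
    """
    groups = [set() for _ in range(n_groups)]
    for tid, lines in enumerate(threads):
        groups[mapping[tid]].update(lines)
    return sum(len(g) for g in groups)

# Eight threads; consecutive pairs (0,1), (2,3), ... access the same
# four cache lines, i.e. each pair fully shares its working set.
threads = [{(tid // 2) * 4 + k for k in range(4)} for tid in range(8)]
n_groups = 4

# Naive round-robin mapping scatters sharers across partitions.
round_robin = [tid % n_groups for tid in range(8)]
# Locality-aware mapping co-locates each sharing pair in one partition.
locality_aware = [tid // 2 for tid in range(8)]

print(total_distinct_lines(threads, round_robin, n_groups))     # -> 32
print(total_distinct_lines(threads, locality_aware, n_groups))  # -> 16
```

In this toy instance the locality-aware mapping halves the number of distinct lines fetched, mirroring the kind of data-reuse improvement the methodology targets at each level of the cache hierarchy.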