A quantitative study of locality in GPU caches for memory-divergent workloads
Citation Link: https://doi.org/10.15480/882.4333
Publication Type
Journal Article
Date Issued
2022-04-05
Language
English
Volume
50
Issue
2
Start Page
189
End Page
216
Citation
International Journal of Parallel Programming 50 (2): 189-216 (2022)
Publisher
Springer Science + Business Media B.V.
GPUs are capable of delivering peak performance in TFLOPs; however, peak performance is often difficult to achieve due to several performance bottlenecks. Memory divergence is one such bottleneck: it makes it harder to exploit locality, causes cache thrashing, and leads to a high miss rate, thereby impeding GPU performance. As data locality is crucial for performance, there have been several efforts to exploit data locality in GPUs. However, there is a lack of quantitative analysis of data locality, which could pave the way for optimizations. In this paper, we quantitatively study data locality and its limits in GPUs at different granularities. We show that, in contrast to previous studies, there is significantly higher inter-warp locality at the L1 data cache for memory-divergent workloads. We further show that about 50% of the cache capacity and other scarce resources, such as NoC bandwidth, are wasted due to data over-fetch caused by memory divergence. While the low spatial utilization of cache lines justifies the sectored-cache design, which fetches only those sectors of a cache line that are needed during a request, our limit study reveals the lost spatial locality for which additional memory requests are needed to fetch the other sectors of the same cache line. This lost spatial locality presents opportunities for further optimizing the cache design.
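To make the notion of memory divergence concrete, the minimal CUDA sketch below (not taken from the paper; all kernel and variable names are hypothetical) contrasts a coalesced read with an indirectly indexed gather. In the gather, the 32 threads of a warp may touch up to 32 different cache lines while using only a few bytes of each, which is the kind of data over-fetch the abstract attributes to memory divergence.

```cuda
// Minimal sketch, assuming an irregular index array as the source of divergence.
#include <cuda_runtime.h>
#include <stdio.h>

// Coalesced access: consecutive threads of a warp read consecutive elements,
// so one warp-level load maps to only a few 128-byte cache lines.
__global__ void coalesced_read(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i];
}

// Memory-divergent access: each thread reads in[idx[i]]. With irregular
// indices, a single warp load can touch many distinct cache lines, and only
// a small fraction of each fetched line is actually used (data over-fetch).
__global__ void divergent_gather(const float *in, const int *idx, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[idx[i]];
}

int main() {
    const int n = 1 << 20;
    float *in, *out;
    int *idx;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    cudaMallocManaged(&idx, n * sizeof(int));
    for (int i = 0; i < n; ++i) {
        in[i] = (float)i;
        idx[i] = (i * 97) % n;  // permuted indices: neighboring threads hit far-apart lines
    }
    coalesced_read<<<(n + 255) / 256, 256>>>(in, out, n);
    divergent_gather<<<(n + 255) / 256, 256>>>(in, idx, out, n);
    cudaDeviceSynchronize();
    printf("done\n");
    cudaFree(in); cudaFree(out); cudaFree(idx);
    return 0;
}
```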
Subjects
Data locality
GPU caches
Memory divergence
DDC Class
600: Technology
Publication version
publishedVersion
Name
Lal2022_Article_AQuantitativeStudyOfLocalityIn.pdf
Size
2.47 MB
Format
Adobe PDF