MRPB: Memory Request Prioritization for Massively Parallel Processors
20th IEEE International Symposium on High Performance Computer Architecture (HPCA-20), 2014
Keywords
cache storage, graphics processing units, parallel processing, performance evaluation, GPU cache management technique, GPU caches, GPU performance, GPU programming, MRPB, PolyBench suites, Rodinia suites, address spaces, cache bypassing, caching efficiency, hardware structure, limited per-thread cache capacity, massively parallel processors, massively parallel throughput-oriented systems, memory access latency, memory access traffic, memory hierarchies, memory request prioritization, memory request prioritization buffer, prioritization methods, request reordering, simulated L1 cache, thread counts