Exploiting Fine-Grained Memory Locality with Predictive Dispatch

Future process technologies promise to complicate, and eventually break, the scaling of traditional 6T SRAM cell memory arrays. In this work, we present an analysis of the microarchitectural impact and propose possible solutions based on the characteristics of a broad range of workloads (SPEC, transaction processing, desktop, and media).

We identify the sensitivity of the workloads to both cache latency and bandwidth, and consider how different SRAM design choices will impact workload performance. We show that, while the SPECcpu benchmarks are primarily throughput-dominated, transaction processing and many desktop workloads show a higher sensitivity to cache access latency. We propose a method for dynamically exploiting fine-grained memory locality by reusing data held in the sense-amp latches to improve available memory bandwidth. Finally, we propose and evaluate a predictive dispatch strategy to recover cache bandwidth in a latency-focused design, using an instruction flush and re-issue recovery policy.
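The interaction of the two ideas above can be illustrated with a toy model: a cache whose subarrays retain the last-read row in their sense-amp latches, paired with a dispatcher that speculatively issues dependent instructions assuming the fast latched-access latency, flushing and re-issuing on a mispredict. This is a minimal sketch under assumed parameters (latencies, flush penalty, subarray mapping are all illustrative, not the microarchitecture evaluated in the report):

```python
# Illustrative latencies (cycles); assumptions, not measured values.
FAST_LATENCY = 1   # requested row still held in the sense-amp latches
SLOW_LATENCY = 3   # full array access required
FLUSH_PENALTY = 5  # cost of flushing and re-issuing dependent instructions

class SubarrayCache:
    """Cache model whose subarrays keep the last-read row in sense-amp latches."""
    def __init__(self, num_subarrays=4):
        self.num_subarrays = num_subarrays
        self.latched_row = [None] * num_subarrays

    def access(self, row):
        sub = row % self.num_subarrays   # hypothetical row-to-subarray mapping
        if self.latched_row[sub] == row:
            return FAST_LATENCY          # reuse data already in the latches
        self.latched_row[sub] = row      # full access refills the latches
        return SLOW_LATENCY

def run(accesses, cache):
    """Dispatch each access predicting the fast latency; flush on a mispredict."""
    cycles = 0
    for row in accesses:
        latency = cache.access(row)
        cycles += latency
        if latency != FAST_LATENCY:
            cycles += FLUSH_PENALTY      # speculation failed: flush and re-issue
    return cycles

# Back-to-back accesses to the same row reuse the sense-amp latches,
# so only the first access to each row pays the miss and flush cost.
print(run([0, 0, 0, 1, 1], SubarrayCache()))  # → 19
```

In this sketch the predictive dispatch pays a flush penalty only when fine-grained locality is absent, capturing the trade-off the report evaluates: speculating on the fast case recovers bandwidth when locality is common, at the cost of recovery work when it is not.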

By: Michael Gschwind, John-David Wellman

Published in: IBM Research Report RC23633, 2004


This report has been submitted for publication outside of IBM and will probably be copyrighted if accepted for publication. It has been issued as a Research Report for early dissemination of its contents. In view of the transfer of copyright to the outside publisher, its distribution outside of IBM prior to publication should be limited to peer communications and specific requests. After outside publication, requests should be filled only by reprints or legally obtained copies of the article (e.g., payment of royalties).


Questions about this service can be mailed to reports@us.ibm.com.