EUROMICRO Conference
Abstract

The memory gap has become an essential factor limiting the peak performance of systems built around high-speed CPUs. The traditional remedy, grounded in the principle of program locality, is to enlarge the cache. However, the instructions held in the I-Cache before they are sent to the Data Processing Unit (DPU) carry useful ordering information that has never been exploited. We therefore propose an architecture that places an Instruction Processing Unit (IPU) in parallel with the ordinary DPU; the IPU can prefetch, analyze, and preprocess a large number of instructions that would otherwise sit untouched in the I-Cache. This is more efficient than a conventional prefetch buffer, which can hold only a few instructions for previewing. With the IPU, load instructions can be preprocessed while the DPU simultaneously executes on data. We call this scheme the Lookahead Cache. This paper describes the principle of the Lookahead Cache, presents the idea of dynamic program locality, and defines quantitative parameters for its evaluation. Tools for simulating the Lookahead Cache were developed. Simulation results show that it improves program locality, and hence the cache hit ratio during program execution, without further enlarging the on-chip cache, which already occupies a large portion of chip area.
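The core idea can be illustrated with a toy model (this is not the paper's simulator, and all parameters are illustrative assumptions): a small direct-mapped data cache where a hypothetical IPU scans a fixed number of instructions ahead of the DPU and installs the cache lines of upcoming loads before the DPU reaches them.

```python
# Toy sketch of the Lookahead Cache principle (illustrative only).
# A direct-mapped data cache; with lookahead > 0, the line of the load
# `lookahead` accesses ahead is installed early, mimicking an IPU that
# preprocesses load instructions still sitting in the I-Cache.

CACHE_LINES = 16   # number of cache lines (assumed parameter)
LINE_BITS = 4      # 16-byte lines (assumed parameter)
LOOKAHEAD = 8      # IPU lookahead depth in instructions (assumed parameter)

def line_of(addr):
    """Map an address to its (cache index, tag) pair."""
    line = addr >> LINE_BITS
    return line % CACHE_LINES, line

def run(loads, lookahead=0):
    """Return the data-cache hit ratio for a stream of load addresses."""
    cache = [None] * CACHE_LINES
    hits = 0
    for i, addr in enumerate(loads):
        idx, tag = line_of(addr)
        if cache[idx] == tag:
            hits += 1
        else:
            cache[idx] = tag          # demand fill on a miss
        if lookahead and i + lookahead < len(loads):
            pidx, ptag = line_of(loads[i + lookahead])
            cache[pidx] = ptag        # IPU prefetch of a future load
    return hits / len(loads)

# A line-strided stream misses on every access without lookahead,
# but almost always hits once the IPU installs lines ahead of time.
loads = [(i * 16) % 2048 for i in range(400)]
base = run(loads)
ahead = run(loads, LOOKAHEAD)
```

In this toy stream `base` is 0.0 and `ahead` is 0.98: the hit-ratio gain comes entirely from reordering when lines are filled, not from adding capacity, which is the abstract's point about improving locality without enlarging the on-chip cache.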