Address Code and Arithmetic Optimizations for Embedded Systems


Bibliographic Details
Other Authors: The Pennsylvania State University CiteSeerX Archives
Format: Text
Language: English
Online Access: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.523.9614
http://www.cs.york.ac.uk/rts/docs/SIGDA-Compendium-1994-2004/papers/2002/aspdac02/pdffiles/07b_2.pdf
Description
Summary: An important class of problems, widespread in both the embedded systems and scientific domains, performs memory-intensive computations on large data sets. These data sets are typically stored in main memory, so the compiler must generate the address of a memory location in order to store each data element, and generate the same address again when the element is subsequently retrieved. This memory address computation is expensive, and if it is not performed efficiently, performance degrades significantly. In this paper, we develop a new compiler approach for optimizing the memory performance of subscripted (array) variables and their address generation in stencil problems, which are common in embedded image processing and other applications. Our approach exploits the observation that in all these stencils, most of the elements accessed are stored close to one another in memory. We optimize stencil codes with a view to reducing both the arithmetic and the address-computation overhead. Because of the regularity of the access pattern and the reuse of data elements between successive iterations of the loop body, there is a common sub-expression between any two successive iterations; such common sub-expressions are difficult to detect with state-of-the-art compiler technology. If the value of the common sub-expression is stored in a scalar, then in the next iteration the value in this scalar can be used instead of performing the computation all over again, greatly reducing the arithmetic overhead. Since only one scalar is kept in a register, there is almost no register pressure. In addition, all array accesses are replaced by pointer dereferences, where the pointers are incremented after each iteration; this reduces the address-computation overhead. Our solution is the only one so far to exploit both scalar conversion and common sub-expressions.
Extensive experimental results on several codes show that our approach performs better than the other ...