Modern mobile processors offer dynamic voltage and frequency scaling, which can be used to reduce the energy requirements of embedded and real-time applications by exploiting idle CPU resources while still maintaining all applications' real-time characteristics. However, accurate prediction of task run-times is key to computing the frequencies and voltages that ensure that all tasks' real-time constraints are met. Past work has used feedback-based approaches, in which applications' past CPU utilizations are used to predict future CPU requirements. Inaccurate predictions in these approaches can lead to missed deadlines, lower-than-expected energy savings, or large overheads due to frequent voltage and frequency changes. Previous solutions ignore other "indicators" of future CPU requirements, such as the frequency of I/O operations, memory accesses, or interrupts. This paper addresses this shortcoming for memory-intensive applications: measured task run-times and cache miss rates are used as feedback for accurate run-time predictions. Cache miss rates indicate the frequency of memory accesses and enable us to derive the latencies introduced by these operations. The results shown in this paper indicate improvements in the number of deadlines met and in the amount of energy saved.
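The core idea above — splitting a task's predicted run-time into a frequency-dependent CPU component and a frequency-independent memory-stall component derived from cache miss counts, then choosing the lowest frequency that still meets the deadline — can be sketched as follows. This is a minimal illustration under assumed names and a simple linear miss-penalty model, not the paper's exact formulation; the miss penalty, frequency list, and cycle counts are all hypothetical values.

```python
def predict_stall_time(cache_misses, miss_penalty_s=100e-9):
    """Estimate memory-stall time as misses times a fixed per-miss
    penalty (assumed model); this latency does not scale with the
    CPU clock, since it is bounded by memory speed."""
    return cache_misses * miss_penalty_s

def lowest_safe_frequency(cpu_cycles, cache_misses, deadline_s, freqs_hz):
    """Pick the lowest available frequency whose predicted run-time
    (CPU cycles / f + memory-stall time) still meets the deadline.
    Falls back to the maximum frequency if no setting is safe."""
    stall = predict_stall_time(cache_misses)
    for f in sorted(freqs_hz):
        if cpu_cycles / f + stall <= deadline_s:
            return f
    return max(freqs_hz)

# Hypothetical task: 4M CPU cycles, 10k cache misses, 10 ms deadline.
freqs = [600e6, 800e6, 1000e6]
f = lowest_safe_frequency(cpu_cycles=4e6, cache_misses=10_000,
                          deadline_s=0.01, freqs_hz=freqs)
```

A purely utilization-based predictor would scale the whole run-time with frequency; separating out the stall term is what lets the scheduler drop to a lower frequency without overshooting the deadline on memory-bound tasks.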